00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 170 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3671 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.152 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.153 The recommended git tool is: git 00:00:00.154 using credential 00000000-0000-0000-0000-000000000002 00:00:00.156 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.174 Fetching changes from the remote Git repository 00:00:00.178 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.208 Using shallow fetch with depth 1 00:00:00.208 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.208 > git --version # timeout=10 00:00:00.237 > git --version # 'git version 2.39.2' 00:00:00.237 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.262 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.262 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.521 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.532 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.543 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.543 > git config core.sparsecheckout # timeout=10 00:00:07.552 > git read-tree -mu HEAD # timeout=10 00:00:07.567 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.589 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.590 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.682 [Pipeline] Start of Pipeline 00:00:07.697 [Pipeline] library 00:00:07.699 Loading library shm_lib@master 00:00:07.699 Library shm_lib@master is cached. Copying from home. 00:00:07.718 [Pipeline] node 00:00:07.734 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.736 [Pipeline] { 00:00:07.747 [Pipeline] catchError 00:00:07.748 [Pipeline] { 00:00:07.762 [Pipeline] wrap 00:00:07.771 [Pipeline] { 00:00:07.780 [Pipeline] stage 00:00:07.782 [Pipeline] { (Prologue) 00:00:07.804 [Pipeline] echo 00:00:07.806 Node: VM-host-SM38 00:00:07.811 [Pipeline] cleanWs 00:00:07.822 [WS-CLEANUP] Deleting project workspace... 00:00:07.822 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.829 [WS-CLEANUP] done 00:00:08.114 [Pipeline] setCustomBuildProperty 00:00:08.199 [Pipeline] httpRequest 00:00:08.732 [Pipeline] echo 00:00:08.734 Sorcerer 10.211.164.101 is alive 00:00:08.745 [Pipeline] retry 00:00:08.747 [Pipeline] { 00:00:08.764 [Pipeline] httpRequest 00:00:08.770 HttpMethod: GET 00:00:08.770 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.771 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.790 Response Code: HTTP/1.1 200 OK 00:00:08.791 Success: Status code 200 is in the accepted range: 200,404 00:00:08.791 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.777 [Pipeline] } 00:00:12.795 [Pipeline] // retry 00:00:12.803 [Pipeline] sh 00:00:13.088 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.107 [Pipeline] httpRequest 00:00:15.561 [Pipeline] echo 00:00:15.564 Sorcerer 10.211.164.101 is alive 00:00:15.575 [Pipeline] retry 00:00:15.578 [Pipeline] { 00:00:15.596 [Pipeline] httpRequest 00:00:15.601 HttpMethod: GET 00:00:15.602 URL: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:15.602 Sending request to url: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:15.609 Response Code: HTTP/1.1 200 OK 00:00:15.610 Success: Status code 200 is in the accepted range: 200,404 00:00:15.610 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:02:43.671 [Pipeline] } 00:02:43.687 [Pipeline] // retry 00:02:43.694 [Pipeline] sh 00:02:43.977 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:02:46.528 [Pipeline] sh 00:02:46.824 + git -C spdk log --oneline -n5 00:02:46.824 b18e1bd62 version: v24.09.1-pre 00:02:46.824 19524ad45 version: v24.09 00:02:46.824 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:02:46.824 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:02:46.824 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:02:46.843 [Pipeline] withCredentials 00:02:46.855 > git --version # timeout=10 00:02:46.869 > git --version # 'git version 2.39.2' 00:02:46.888 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:02:46.891 [Pipeline] { 00:02:46.900 [Pipeline] retry 00:02:46.902 [Pipeline] { 00:02:46.918 [Pipeline] sh 00:02:47.202 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:02:47.214 [Pipeline] } 00:02:47.231 [Pipeline] // retry 00:02:47.237 [Pipeline] } 00:02:47.254 [Pipeline] // withCredentials 00:02:47.264 [Pipeline] httpRequest 00:02:47.664 [Pipeline] echo 00:02:47.666 Sorcerer 10.211.164.101 is alive 00:02:47.676 [Pipeline] retry 00:02:47.679 [Pipeline] { 00:02:47.692 [Pipeline] httpRequest 00:02:47.696 HttpMethod: GET 00:02:47.697 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:47.698 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:47.699 Response Code: HTTP/1.1 200 OK 00:02:47.700 Success: Status code 200 is in the accepted range: 200,404 00:02:47.700 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:53.945 [Pipeline] } 00:02:53.961 [Pipeline] // retry 00:02:53.968 [Pipeline] sh 00:02:54.252 + tar --no-same-owner -xf 
dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:55.680 [Pipeline] sh 00:02:56.002 + git -C dpdk log --oneline -n5 00:02:56.002 eeb0605f11 version: 23.11.0 00:02:56.002 238778122a doc: update release notes for 23.11 00:02:56.002 46aa6b3cfc doc: fix description of RSS features 00:02:56.002 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:56.002 7e421ae345 devtools: support skipping forbid rule check 00:02:56.021 [Pipeline] writeFile 00:02:56.038 [Pipeline] sh 00:02:56.322 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:56.337 [Pipeline] sh 00:02:56.620 + cat autorun-spdk.conf 00:02:56.620 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:56.620 SPDK_TEST_NVME=1 00:02:56.620 SPDK_TEST_FTL=1 00:02:56.620 SPDK_TEST_ISAL=1 00:02:56.620 SPDK_RUN_ASAN=1 00:02:56.620 SPDK_RUN_UBSAN=1 00:02:56.620 SPDK_TEST_XNVME=1 00:02:56.620 SPDK_TEST_NVME_FDP=1 00:02:56.620 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:56.620 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:56.620 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:56.628 RUN_NIGHTLY=1 00:02:56.630 [Pipeline] } 00:02:56.647 [Pipeline] // stage 00:02:56.665 [Pipeline] stage 00:02:56.668 [Pipeline] { (Run VM) 00:02:56.682 [Pipeline] sh 00:02:56.969 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:56.969 + echo 'Start stage prepare_nvme.sh' 00:02:56.969 Start stage prepare_nvme.sh 00:02:56.969 + [[ -n 8 ]] 00:02:56.969 + disk_prefix=ex8 00:02:56.969 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:56.969 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:56.969 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:56.969 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:56.969 ++ SPDK_TEST_NVME=1 00:02:56.969 ++ SPDK_TEST_FTL=1 00:02:56.969 ++ SPDK_TEST_ISAL=1 00:02:56.969 ++ SPDK_RUN_ASAN=1 00:02:56.969 ++ SPDK_RUN_UBSAN=1 00:02:56.969 ++ SPDK_TEST_XNVME=1 00:02:56.969 ++ SPDK_TEST_NVME_FDP=1 00:02:56.969 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:56.969 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:56.969 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:56.969 ++ RUN_NIGHTLY=1 00:02:56.969 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:56.969 + nvme_files=() 00:02:56.969 + declare -A nvme_files 00:02:56.969 + backend_dir=/var/lib/libvirt/images/backends 00:02:56.969 + nvme_files['nvme.img']=5G 00:02:56.969 + nvme_files['nvme-cmb.img']=5G 00:02:56.969 + nvme_files['nvme-multi0.img']=4G 00:02:56.969 + nvme_files['nvme-multi1.img']=4G 00:02:56.969 + nvme_files['nvme-multi2.img']=4G 00:02:56.969 + nvme_files['nvme-openstack.img']=8G 00:02:56.969 + nvme_files['nvme-zns.img']=5G 00:02:56.969 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:56.969 + (( SPDK_TEST_FTL == 1 )) 00:02:56.969 + nvme_files["nvme-ftl.img"]=6G 00:02:56.969 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:56.969 + nvme_files["nvme-fdp.img"]=1G 00:02:56.969 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:56.969 + for nvme in "${!nvme_files[@]}" 00:02:56.969 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:02:57.230 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:57.230 + for nvme in "${!nvme_files[@]}" 00:02:57.230 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G 00:02:58.171 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:58.171 + for nvme in "${!nvme_files[@]}" 00:02:58.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:02:58.171 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:58.171 + for nvme in "${!nvme_files[@]}" 00:02:58.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:02:58.171 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:58.171 + for nvme in "${!nvme_files[@]}" 00:02:58.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:02:58.171 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:58.171 + for nvme in "${!nvme_files[@]}" 00:02:58.171 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:02:58.430 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:58.430 + for nvme in "${!nvme_files[@]}" 00:02:58.430 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:02:58.689 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:58.689 + for nvme in "${!nvme_files[@]}" 00:02:58.689 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G 00:02:58.950 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:58.950 + for nvme in "${!nvme_files[@]}" 00:02:58.950 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:02:59.521 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:59.521 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:02:59.521 + echo 'End stage prepare_nvme.sh' 00:02:59.521 End stage prepare_nvme.sh 00:02:59.534 [Pipeline] sh 00:02:59.819 + DISTRO=fedora39 00:02:59.819 + CPUS=10 00:02:59.819 + RAM=12288 00:02:59.819 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:59.819 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:59.819 00:02:59.819 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:59.819 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:59.819 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:59.819 HELP=0 00:02:59.819 DRY_RUN=0 00:02:59.819 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img, 00:02:59.819 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:59.819 NVME_AUTO_CREATE=0 00:02:59.819 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,, 00:02:59.819 NVME_CMB=,,,, 00:02:59.819 NVME_PMR=,,,, 00:02:59.819 NVME_ZNS=,,,, 00:02:59.819 NVME_MS=true,,,, 00:02:59.819 NVME_FDP=,,,on, 00:02:59.819 SPDK_VAGRANT_DISTRO=fedora39 00:02:59.819 SPDK_VAGRANT_VMCPU=10 00:02:59.819 SPDK_VAGRANT_VMRAM=12288 00:02:59.819 SPDK_VAGRANT_PROVIDER=libvirt 00:02:59.819 SPDK_VAGRANT_HTTP_PROXY= 00:02:59.819 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:59.819 SPDK_OPENSTACK_NETWORK=0 00:02:59.819 VAGRANT_PACKAGE_BOX=0 00:02:59.819 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:59.819 FORCE_DISTRO=true 00:02:59.819 VAGRANT_BOX_VERSION= 00:02:59.819 EXTRA_VAGRANTFILES= 00:02:59.819 NIC_MODEL=e1000 00:02:59.819 00:02:59.819 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:59.819 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:03:02.360 Bringing machine 'default' up with 'libvirt' provider... 00:03:02.620 ==> default: Creating image (snapshot of base box volume). 00:03:02.881 ==> default: Creating domain with the following settings... 
00:03:02.881 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732705111_9a945fa5e0c6caae4ae1 00:03:02.881 ==> default: -- Domain type: kvm 00:03:02.881 ==> default: -- Cpus: 10 00:03:02.881 ==> default: -- Feature: acpi 00:03:02.881 ==> default: -- Feature: apic 00:03:02.881 ==> default: -- Feature: pae 00:03:02.881 ==> default: -- Memory: 12288M 00:03:02.881 ==> default: -- Memory Backing: hugepages: 00:03:02.881 ==> default: -- Management MAC: 00:03:02.881 ==> default: -- Loader: 00:03:02.881 ==> default: -- Nvram: 00:03:02.881 ==> default: -- Base box: spdk/fedora39 00:03:02.881 ==> default: -- Storage pool: default 00:03:02.881 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732705111_9a945fa5e0c6caae4ae1.img (20G) 00:03:02.881 ==> default: -- Volume Cache: default 00:03:02.881 ==> default: -- Kernel: 00:03:02.881 ==> default: -- Initrd: 00:03:02.881 ==> default: -- Graphics Type: vnc 00:03:02.881 ==> default: -- Graphics Port: -1 00:03:02.881 ==> default: -- Graphics IP: 127.0.0.1 00:03:02.881 ==> default: -- Graphics Password: Not defined 00:03:02.881 ==> default: -- Video Type: cirrus 00:03:02.881 ==> default: -- Video VRAM: 9216 00:03:02.881 ==> default: -- Sound Type: 00:03:02.881 ==> default: -- Keymap: en-us 00:03:02.881 ==> default: -- TPM Path: 00:03:02.881 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:02.881 ==> default: -- Command line args: 00:03:02.881 ==> default: -> value=-device, 00:03:02.881 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:02.881 ==> default: -> value=-drive, 00:03:02.881 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:03:02.881 ==> default: -> value=-device, 00:03:02.881 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:03:02.881 ==> default: -> value=-device, 00:03:02.881 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:02.881 ==> default: -> value=-drive, 00:03:02.881 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0, 00:03:02.881 ==> default: -> value=-device, 00:03:02.881 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:02.881 ==> default: -> value=-device, 00:03:02.881 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:03:02.881 ==> default: -> value=-drive, 00:03:02.881 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:03:02.881 ==> default: -> value=-device, 00:03:02.881 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:02.881 ==> default: -> value=-drive, 00:03:02.881 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:03:02.881 ==> default: -> value=-device, 00:03:02.881 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:02.881 ==> default: -> value=-drive, 00:03:02.881 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:03:02.881 ==> default: -> value=-device, 00:03:02.881 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:02.881 ==> default: -> value=-device, 00:03:02.881 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:03:02.881 ==> default: -> value=-device, 00:03:02.881 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:03:02.881 ==> default: -> value=-drive, 00:03:02.881 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:03:02.881 ==> default: -> value=-device, 00:03:02.881 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:03.142 ==> default: Creating shared folders metadata... 00:03:03.142 ==> default: Starting domain. 00:03:04.527 ==> default: Waiting for domain to get an IP address... 00:03:26.542 ==> default: Waiting for SSH to become available... 00:03:26.542 ==> default: Configuring and enabling network interfaces... 00:03:28.459 default: SSH address: 192.168.121.241:22 00:03:28.459 default: SSH username: vagrant 00:03:28.459 default: SSH auth method: private key 00:03:30.377 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:38.512 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:03:43.803 ==> default: Mounting SSHFS shared folder... 00:03:45.721 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:45.721 ==> default: Checking Mount.. 00:03:46.664 ==> default: Folder Successfully Mounted! 00:03:46.664 00:03:46.664 SUCCESS! 00:03:46.664 00:03:46.664 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:03:46.664 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:46.664 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:03:46.664 00:03:46.675 [Pipeline] } 00:03:46.691 [Pipeline] // stage 00:03:46.700 [Pipeline] dir 00:03:46.701 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:03:46.703 [Pipeline] { 00:03:46.715 [Pipeline] catchError 00:03:46.716 [Pipeline] { 00:03:46.729 [Pipeline] sh 00:03:47.015 + vagrant ssh-config --host vagrant 00:03:47.015 + sed -ne '/^Host/,$p' 00:03:47.015 + tee ssh_conf 00:03:49.602 Host vagrant 00:03:49.602 HostName 192.168.121.241 00:03:49.602 User vagrant 00:03:49.602 Port 22 00:03:49.602 UserKnownHostsFile /dev/null 00:03:49.602 StrictHostKeyChecking no 00:03:49.602 PasswordAuthentication no 00:03:49.602 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:49.602 IdentitiesOnly yes 00:03:49.603 LogLevel FATAL 00:03:49.603 ForwardAgent yes 00:03:49.603 ForwardX11 yes 00:03:49.603 00:03:49.619 [Pipeline] withEnv 00:03:49.622 [Pipeline] { 00:03:49.639 [Pipeline] sh 00:03:49.924 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:03:49.924 source /etc/os-release 00:03:49.924 [[ -e /image.version ]] && img=$(< /image.version) 00:03:49.924 # Minimal, systemd-like check. 
00:03:49.924 if [[ -e /.dockerenv ]]; then 00:03:49.924 # Clear garbage from the node'\''s name: 00:03:49.924 # agt-er_autotest_547-896 -> autotest_547-896 00:03:49.924 # $HOSTNAME is the actual container id 00:03:49.924 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:49.924 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:49.924 # We can assume this is a mount from a host where container is running, 00:03:49.924 # so fetch its hostname to easily identify the target swarm worker. 00:03:49.924 container="$(< /etc/hostname) ($agent)" 00:03:49.924 else 00:03:49.924 # Fallback 00:03:49.924 container=$agent 00:03:49.924 fi 00:03:49.924 fi 00:03:49.924 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:49.924 ' 00:03:50.197 [Pipeline] } 00:03:50.212 [Pipeline] // withEnv 00:03:50.219 [Pipeline] setCustomBuildProperty 00:03:50.232 [Pipeline] stage 00:03:50.233 [Pipeline] { (Tests) 00:03:50.253 [Pipeline] sh 00:03:50.548 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:50.826 [Pipeline] sh 00:03:51.111 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:51.388 [Pipeline] timeout 00:03:51.388 Timeout set to expire in 50 min 00:03:51.390 [Pipeline] { 00:03:51.405 [Pipeline] sh 00:03:51.689 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:03:52.261 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:03:52.275 [Pipeline] sh 00:03:52.561 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:03:52.838 [Pipeline] sh 00:03:53.122 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:53.402 [Pipeline] sh 00:03:53.683 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:03:53.948 ++ readlink -f spdk_repo 00:03:53.948 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:53.948 + [[ -n /home/vagrant/spdk_repo ]] 00:03:53.948 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:53.948 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:53.948 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:53.948 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:53.948 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:53.948 + [[ nvme-vg-autotest == pkgdep-* ]] 00:03:53.948 + cd /home/vagrant/spdk_repo 00:03:53.948 + source /etc/os-release 00:03:53.948 ++ NAME='Fedora Linux' 00:03:53.948 ++ VERSION='39 (Cloud Edition)' 00:03:53.948 ++ ID=fedora 00:03:53.948 ++ VERSION_ID=39 00:03:53.948 ++ VERSION_CODENAME= 00:03:53.948 ++ PLATFORM_ID=platform:f39 00:03:53.948 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:53.948 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:53.948 ++ LOGO=fedora-logo-icon 00:03:53.948 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:53.948 ++ HOME_URL=https://fedoraproject.org/ 00:03:53.948 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:53.948 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:53.948 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:53.948 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:53.948 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:53.948 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:53.948 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:53.948 ++ SUPPORT_END=2024-11-12 00:03:53.948 ++ VARIANT='Cloud Edition' 00:03:53.948 ++ VARIANT_ID=cloud 00:03:53.948 + uname -a 00:03:53.948 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:53.948 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:54.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.483 Hugepages 00:03:54.483 node hugesize free / total 00:03:54.483 node0 1048576kB 0 / 0 00:03:54.483 node0 2048kB 0 / 0 00:03:54.483 00:03:54.483 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:54.483 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:54.483 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:54.483 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:54.483 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:03:54.483 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:54.744 + rm -f /tmp/spdk-ld-path 00:03:54.744 + source autorun-spdk.conf 00:03:54.744 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:54.744 ++ SPDK_TEST_NVME=1 00:03:54.744 ++ SPDK_TEST_FTL=1 00:03:54.744 ++ SPDK_TEST_ISAL=1 00:03:54.744 ++ SPDK_RUN_ASAN=1 00:03:54.744 ++ SPDK_RUN_UBSAN=1 00:03:54.744 ++ SPDK_TEST_XNVME=1 00:03:54.744 ++ SPDK_TEST_NVME_FDP=1 00:03:54.744 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:54.744 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:54.744 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:54.744 ++ RUN_NIGHTLY=1 00:03:54.744 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:54.744 + [[ -n '' ]] 00:03:54.744 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:54.744 + for M in /var/spdk/build-*-manifest.txt 00:03:54.744 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:54.744 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:54.744 + for M in /var/spdk/build-*-manifest.txt 00:03:54.744 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:54.744 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:54.744 + for M in /var/spdk/build-*-manifest.txt 00:03:54.744 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:54.745 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:54.745 ++ uname 00:03:54.745 + [[ Linux == 
\L\i\n\u\x ]] 00:03:54.745 + sudo dmesg -T 00:03:54.745 + sudo dmesg --clear 00:03:54.745 + dmesg_pid=5767 00:03:54.745 + [[ Fedora Linux == FreeBSD ]] 00:03:54.745 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:54.745 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:54.745 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:54.745 + [[ -x /usr/src/fio-static/fio ]] 00:03:54.745 + sudo dmesg -Tw 00:03:54.745 + export FIO_BIN=/usr/src/fio-static/fio 00:03:54.745 + FIO_BIN=/usr/src/fio-static/fio 00:03:54.745 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:54.745 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:54.745 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:54.745 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:54.745 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:54.745 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:54.745 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:54.745 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:54.745 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:54.745 Test configuration: 00:03:54.745 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:54.745 SPDK_TEST_NVME=1 00:03:54.745 SPDK_TEST_FTL=1 00:03:54.745 SPDK_TEST_ISAL=1 00:03:54.745 SPDK_RUN_ASAN=1 00:03:54.745 SPDK_RUN_UBSAN=1 00:03:54.745 SPDK_TEST_XNVME=1 00:03:54.745 SPDK_TEST_NVME_FDP=1 00:03:54.745 SPDK_TEST_NATIVE_DPDK=v23.11 00:03:54.745 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:54.745 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:54.745 RUN_NIGHTLY=1 10:59:23 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:03:54.745 10:59:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:54.745 10:59:23 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:54.745 10:59:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:54.745 10:59:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:54.745 10:59:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:54.745 10:59:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.745 10:59:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.745 10:59:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.745 10:59:23 -- paths/export.sh@5 -- $ export PATH 00:03:54.745 10:59:23 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.745 10:59:23 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:54.745 10:59:23 -- common/autobuild_common.sh@479 -- $ date +%s 00:03:54.745 10:59:23 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732705163.XXXXXX 00:03:54.745 10:59:23 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732705163.NOHi8A 00:03:54.745 10:59:23 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:03:54.745 10:59:23 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:03:54.745 10:59:23 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:54.745 10:59:23 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:03:54.745 10:59:23 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:54.745 10:59:23 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:54.745 10:59:23 -- common/autobuild_common.sh@495 -- $ get_config_params 00:03:54.745 10:59:23 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:54.745 10:59:23 -- common/autotest_common.sh@10 -- $ set +x 00:03:54.745 10:59:23 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:03:54.745 10:59:23 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:03:54.745 10:59:23 -- pm/common@17 -- $ local monitor 00:03:54.745 10:59:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.005 10:59:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.005 10:59:23 -- pm/common@25 -- $ sleep 1 00:03:55.005 10:59:23 -- pm/common@21 -- $ date +%s 00:03:55.005 10:59:23 -- pm/common@21 -- $ date +%s 00:03:55.005 10:59:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732705163 00:03:55.005 10:59:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732705163 00:03:55.005 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732705163_collect-cpu-load.pm.log 00:03:55.005 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732705163_collect-vmstat.pm.log 00:03:55.949 10:59:24 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:03:55.949 10:59:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:55.949 10:59:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:55.949 10:59:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:55.949 10:59:24 -- spdk/autobuild.sh@16 -- $ date -u 00:03:55.949 Wed 
Nov 27 10:59:24 AM UTC 2024 00:03:55.949 10:59:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:55.949 v24.09-rc1-9-gb18e1bd62 00:03:55.949 10:59:24 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:55.949 10:59:24 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:55.949 10:59:24 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:55.949 10:59:24 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:55.949 10:59:24 -- common/autotest_common.sh@10 -- $ set +x 00:03:55.949 ************************************ 00:03:55.949 START TEST asan 00:03:55.949 ************************************ 00:03:55.949 10:59:24 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:03:55.949 using asan 00:03:55.949 00:03:55.949 real 0m0.001s 00:03:55.949 user 0m0.000s 00:03:55.949 sys 0m0.000s 00:03:55.949 10:59:24 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:55.949 10:59:24 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:55.949 ************************************ 00:03:55.949 END TEST asan 00:03:55.949 ************************************ 00:03:55.949 10:59:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:55.949 10:59:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:55.949 10:59:24 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:55.949 10:59:24 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:55.949 10:59:24 -- common/autotest_common.sh@10 -- $ set +x 00:03:55.949 ************************************ 00:03:55.949 START TEST ubsan 00:03:55.949 ************************************ 00:03:55.949 using ubsan 00:03:55.949 10:59:24 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:55.949 00:03:55.949 real 0m0.000s 00:03:55.949 user 0m0.000s 00:03:55.949 sys 0m0.000s 00:03:55.949 10:59:24 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:55.949 ************************************ 00:03:55.949 END TEST ubsan 00:03:55.949 ************************************ 00:03:55.949 10:59:24 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:55.949 10:59:24 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:03:55.949 10:59:24 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:55.949 10:59:24 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:55.949 10:59:24 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:03:55.949 10:59:24 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:55.949 10:59:24 -- common/autotest_common.sh@10 -- $ set +x 00:03:55.949 ************************************ 00:03:55.949 START TEST build_native_dpdk 00:03:55.949 ************************************ 00:03:55.949 10:59:24 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:03:55.949 10:59:24 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:55.950 10:59:24 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:03:55.950 eeb0605f11 version: 23.11.0 00:03:55.950 238778122a doc: update release notes for 23.11 00:03:55.950 46aa6b3cfc doc: fix description of RSS features 00:03:55.950 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:03:55.950 7e421ae345 devtools: support skipping forbid rule check 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:03:55.950 10:59:24 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:03:55.950 10:59:24 build_native_dpdk -- 
scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:55.950 10:59:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:56.211 10:59:24 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:03:56.211 patching file config/rte_config.h 00:03:56.211 Hunk #1 succeeded at 60 (offset 1 line). 
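[Note] The lt/ge checks traced above both call cmp_versions from scripts/common.sh, which splits each version string on '.', '-' and ':' and compares the resulting fields numerically from left to right, padding missing fields with 0. A minimal standalone sketch of that comparison (simplified, assumes purely numeric fields, not the exact SPDK helper) looks like this in bash:

    # Sketch: compare two dotted versions field by field (simplified form of cmp_versions).
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i x y
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0   # first differing field decides
            (( x > y )) && return 1
        done
        return 1                      # equal versions are not "less than"
    }

    version_lt 23.11.0 21.11.0 || echo "23.11.0 is not older than 21.11.0"   # matches the lt check above
    version_lt 23.11.0 24.07.0 && echo "23.11.0 is older than 24.07.0"       # matches the lt check below
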
00:03:56.211 10:59:24 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:03:56.211 10:59:24 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:03:56.211 patching file lib/pcapng/rte_pcapng.c 00:03:56.211 10:59:24 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@341 -- 
$ ver2_l=3 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:56.211 10:59:24 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:03:56.211 10:59:24 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:03:56.211 10:59:24 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:03:56.211 10:59:24 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:03:56.211 10:59:24 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:03:56.211 10:59:24 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:04:01.497 The Meson build system 00:04:01.497 Version: 1.5.0 00:04:01.497 Source dir: /home/vagrant/spdk_repo/dpdk 00:04:01.497 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:04:01.497 Build type: native build 00:04:01.497 Program cat found: YES (/usr/bin/cat) 00:04:01.497 Project name: DPDK 00:04:01.497 Project version: 23.11.0 00:04:01.497 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:01.497 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:01.497 Host machine cpu family: x86_64 00:04:01.497 Host machine cpu: x86_64 00:04:01.497 Message: ## Building in Developer Mode ## 00:04:01.497 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:01.497 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:04:01.497 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:04:01.497 Program python3 found: YES (/usr/bin/python3) 00:04:01.497 Program cat found: YES (/usr/bin/cat) 00:04:01.497 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:04:01.497 Compiler for C supports arguments -march=native: YES 00:04:01.497 Checking for size of "void *" : 8 00:04:01.497 Checking for size of "void *" : 8 (cached) 00:04:01.497 Library m found: YES 00:04:01.497 Library numa found: YES 00:04:01.497 Has header "numaif.h" : YES 00:04:01.497 Library fdt found: NO 00:04:01.497 Library execinfo found: NO 00:04:01.497 Has header "execinfo.h" : YES 00:04:01.497 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:01.497 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:01.497 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:01.497 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:01.497 Run-time dependency openssl found: YES 3.1.1 00:04:01.497 Run-time dependency libpcap found: YES 1.10.4 00:04:01.497 Has header "pcap.h" with dependency libpcap: YES 00:04:01.497 Compiler for C supports arguments -Wcast-qual: YES 00:04:01.497 Compiler for C supports arguments -Wdeprecated: YES 00:04:01.497 Compiler for C supports arguments -Wformat: YES 00:04:01.497 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:01.497 Compiler for C supports arguments -Wformat-security: NO 00:04:01.497 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:01.497 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:01.497 Compiler for C supports arguments -Wnested-externs: YES 00:04:01.497 Compiler for C supports arguments -Wold-style-definition: YES 00:04:01.497 Compiler for C supports arguments -Wpointer-arith: YES 00:04:01.497 Compiler for C supports arguments -Wsign-compare: YES 00:04:01.497 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:01.497 Compiler for C supports arguments -Wundef: YES 00:04:01.497 Compiler for C supports arguments -Wwrite-strings: YES 00:04:01.497 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:01.497 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:01.497 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:01.497 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:01.497 Program objdump found: YES (/usr/bin/objdump) 00:04:01.497 Compiler for C supports arguments -mavx512f: YES 00:04:01.497 Checking if "AVX512 checking" compiles: YES 00:04:01.497 Fetching value of define "__SSE4_2__" : 1 00:04:01.497 Fetching value of define "__AES__" : 1 00:04:01.497 Fetching value of define "__AVX__" : 1 00:04:01.497 Fetching value of define "__AVX2__" : 1 00:04:01.497 Fetching value of define "__AVX512BW__" : 1 00:04:01.497 Fetching value of define "__AVX512CD__" : 1 00:04:01.497 Fetching value of define "__AVX512DQ__" : 1 00:04:01.497 Fetching value of define "__AVX512F__" : 1 00:04:01.497 Fetching value of define "__AVX512VL__" : 1 00:04:01.497 Fetching value of define "__PCLMUL__" : 1 00:04:01.497 Fetching value of define "__RDRND__" : 1 00:04:01.497 Fetching value of define "__RDSEED__" : 1 00:04:01.497 Fetching value of define "__VPCLMULQDQ__" : 1 00:04:01.497 Fetching value of define "__znver1__" : (undefined) 00:04:01.497 Fetching value of define "__znver2__" : (undefined) 00:04:01.497 Fetching value of define "__znver3__" : (undefined) 00:04:01.497 Fetching value of define "__znver4__" : (undefined) 00:04:01.497 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:01.497 Message: lib/log: Defining dependency "log" 00:04:01.497 Message: lib/kvargs: Defining dependency "kvargs" 00:04:01.497 Message: lib/telemetry: Defining dependency "telemetry" 
00:04:01.497 Checking for function "getentropy" : NO 00:04:01.497 Message: lib/eal: Defining dependency "eal" 00:04:01.497 Message: lib/ring: Defining dependency "ring" 00:04:01.497 Message: lib/rcu: Defining dependency "rcu" 00:04:01.497 Message: lib/mempool: Defining dependency "mempool" 00:04:01.497 Message: lib/mbuf: Defining dependency "mbuf" 00:04:01.497 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:01.497 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:01.497 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:01.497 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:01.497 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:01.498 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:04:01.498 Compiler for C supports arguments -mpclmul: YES 00:04:01.498 Compiler for C supports arguments -maes: YES 00:04:01.498 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:01.498 Compiler for C supports arguments -mavx512bw: YES 00:04:01.498 Compiler for C supports arguments -mavx512dq: YES 00:04:01.498 Compiler for C supports arguments -mavx512vl: YES 00:04:01.498 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:01.498 Compiler for C supports arguments -mavx2: YES 00:04:01.498 Compiler for C supports arguments -mavx: YES 00:04:01.498 Message: lib/net: Defining dependency "net" 00:04:01.498 Message: lib/meter: Defining dependency "meter" 00:04:01.498 Message: lib/ethdev: Defining dependency "ethdev" 00:04:01.498 Message: lib/pci: Defining dependency "pci" 00:04:01.498 Message: lib/cmdline: Defining dependency "cmdline" 00:04:01.498 Message: lib/metrics: Defining dependency "metrics" 00:04:01.498 Message: lib/hash: Defining dependency "hash" 00:04:01.498 Message: lib/timer: Defining dependency "timer" 00:04:01.498 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:01.498 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:01.498 Fetching value of define "__AVX512CD__" : 1 (cached) 00:04:01.498 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:01.498 Message: lib/acl: Defining dependency "acl" 00:04:01.498 Message: lib/bbdev: Defining dependency "bbdev" 00:04:01.498 Message: lib/bitratestats: Defining dependency "bitratestats" 00:04:01.498 Run-time dependency libelf found: YES 0.191 00:04:01.498 Message: lib/bpf: Defining dependency "bpf" 00:04:01.498 Message: lib/cfgfile: Defining dependency "cfgfile" 00:04:01.498 Message: lib/compressdev: Defining dependency "compressdev" 00:04:01.498 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:01.498 Message: lib/distributor: Defining dependency "distributor" 00:04:01.498 Message: lib/dmadev: Defining dependency "dmadev" 00:04:01.498 Message: lib/efd: Defining dependency "efd" 00:04:01.498 Message: lib/eventdev: Defining dependency "eventdev" 00:04:01.498 Message: lib/dispatcher: Defining dependency "dispatcher" 00:04:01.498 Message: lib/gpudev: Defining dependency "gpudev" 00:04:01.498 Message: lib/gro: Defining dependency "gro" 00:04:01.498 Message: lib/gso: Defining dependency "gso" 00:04:01.498 Message: lib/ip_frag: Defining dependency "ip_frag" 00:04:01.498 Message: lib/jobstats: Defining dependency "jobstats" 00:04:01.498 Message: lib/latencystats: Defining dependency "latencystats" 00:04:01.498 Message: lib/lpm: Defining dependency "lpm" 00:04:01.498 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:01.498 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:01.498 Fetching value of define "__AVX512IFMA__" : 1 00:04:01.498 Message: 
lib/member: Defining dependency "member" 00:04:01.498 Message: lib/pcapng: Defining dependency "pcapng" 00:04:01.498 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:01.498 Message: lib/power: Defining dependency "power" 00:04:01.498 Message: lib/rawdev: Defining dependency "rawdev" 00:04:01.498 Message: lib/regexdev: Defining dependency "regexdev" 00:04:01.498 Message: lib/mldev: Defining dependency "mldev" 00:04:01.498 Message: lib/rib: Defining dependency "rib" 00:04:01.498 Message: lib/reorder: Defining dependency "reorder" 00:04:01.498 Message: lib/sched: Defining dependency "sched" 00:04:01.498 Message: lib/security: Defining dependency "security" 00:04:01.498 Message: lib/stack: Defining dependency "stack" 00:04:01.498 Has header "linux/userfaultfd.h" : YES 00:04:01.498 Has header "linux/vduse.h" : YES 00:04:01.498 Message: lib/vhost: Defining dependency "vhost" 00:04:01.498 Message: lib/ipsec: Defining dependency "ipsec" 00:04:01.498 Message: lib/pdcp: Defining dependency "pdcp" 00:04:01.498 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:01.498 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:01.498 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:01.498 Message: lib/fib: Defining dependency "fib" 00:04:01.498 Message: lib/port: Defining dependency "port" 00:04:01.498 Message: lib/pdump: Defining dependency "pdump" 00:04:01.498 Message: lib/table: Defining dependency "table" 00:04:01.498 Message: lib/pipeline: Defining dependency "pipeline" 00:04:01.498 Message: lib/graph: Defining dependency "graph" 00:04:01.498 Message: lib/node: Defining dependency "node" 00:04:01.498 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:01.498 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:01.498 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:01.498 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:02.440 Compiler for C supports arguments -Wno-sign-compare: YES 00:04:02.440 Compiler for C supports arguments -Wno-unused-value: YES 00:04:02.440 Compiler for C supports arguments -Wno-format: YES 00:04:02.440 Compiler for C supports arguments -Wno-format-security: YES 00:04:02.440 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:04:02.440 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:02.440 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:04:02.440 Compiler for C supports arguments -Wno-unused-parameter: YES 00:04:02.440 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:02.440 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:02.440 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:02.440 Compiler for C supports arguments -mavx512bw: YES (cached) 00:04:02.440 Compiler for C supports arguments -march=skylake-avx512: YES 00:04:02.440 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:04:02.440 Has header "sys/epoll.h" : YES 00:04:02.440 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:02.440 Configuring doxy-api-html.conf using configuration 00:04:02.440 Configuring doxy-api-man.conf using configuration 00:04:02.440 Program mandb found: YES (/usr/bin/mandb) 00:04:02.440 Program sphinx-build found: NO 00:04:02.440 Configuring rte_build_config.h using configuration 00:04:02.440 Message: 00:04:02.440 ================= 00:04:02.440 Applications Enabled 00:04:02.440 ================= 00:04:02.440 00:04:02.440 apps: 00:04:02.440 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:04:02.440 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:04:02.440 test-pmd, test-regex, test-sad, test-security-perf, 00:04:02.440 00:04:02.440 Message: 00:04:02.440 ================= 00:04:02.440 Libraries Enabled 00:04:02.440 ================= 00:04:02.440 00:04:02.440 libs: 00:04:02.440 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:02.440 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:04:02.441 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:04:02.441 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:04:02.441 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:04:02.441 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:04:02.441 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:04:02.441 00:04:02.441 00:04:02.441 Message: 00:04:02.441 =============== 00:04:02.441 Drivers Enabled 00:04:02.441 =============== 00:04:02.441 00:04:02.441 common: 00:04:02.441 00:04:02.441 bus: 00:04:02.441 pci, vdev, 00:04:02.441 mempool: 00:04:02.441 ring, 00:04:02.441 dma: 00:04:02.441 00:04:02.441 net: 00:04:02.441 i40e, 00:04:02.441 raw: 00:04:02.441 00:04:02.441 crypto: 00:04:02.441 00:04:02.441 compress: 00:04:02.441 00:04:02.441 regex: 00:04:02.441 00:04:02.441 ml: 00:04:02.441 00:04:02.441 vdpa: 00:04:02.441 00:04:02.441 event: 00:04:02.441 00:04:02.441 baseband: 00:04:02.441 00:04:02.441 gpu: 00:04:02.441 00:04:02.441 00:04:02.441 Message: 00:04:02.441 ================= 00:04:02.441 Content Skipped 00:04:02.441 ================= 00:04:02.441 00:04:02.441 apps: 00:04:02.441 00:04:02.441 libs: 00:04:02.441 00:04:02.441 drivers: 00:04:02.441 common/cpt: not in enabled drivers build config 00:04:02.441 common/dpaax: not in enabled drivers build config 00:04:02.441 common/iavf: not in enabled drivers build config 00:04:02.441 common/idpf: not in enabled drivers build config 00:04:02.441 common/mvep: not in enabled drivers build config 00:04:02.441 common/octeontx: not in enabled drivers build config 00:04:02.441 bus/auxiliary: not in enabled drivers build config 00:04:02.441 bus/cdx: not in enabled drivers build config 00:04:02.441 bus/dpaa: not in enabled drivers build config 00:04:02.441 bus/fslmc: not in enabled drivers build config 00:04:02.441 bus/ifpga: not in enabled drivers build config 00:04:02.441 bus/platform: not in enabled drivers build config 00:04:02.441 bus/vmbus: not in enabled drivers build config 00:04:02.441 common/cnxk: not in enabled drivers build config 00:04:02.441 common/mlx5: not in enabled drivers build config 00:04:02.441 common/nfp: not in enabled drivers build config 00:04:02.441 common/qat: not in enabled drivers build config 00:04:02.441 common/sfc_efx: not in enabled drivers build config 00:04:02.441 mempool/bucket: not in enabled drivers build config 00:04:02.441 mempool/cnxk: not in enabled drivers build config 00:04:02.441 mempool/dpaa: not in enabled drivers build config 00:04:02.441 mempool/dpaa2: not in enabled drivers build config 00:04:02.441 mempool/octeontx: not in enabled drivers build config 00:04:02.441 mempool/stack: not in enabled drivers build config 00:04:02.441 dma/cnxk: not in enabled drivers build config 00:04:02.441 dma/dpaa: not in enabled drivers build config 00:04:02.441 dma/dpaa2: not in enabled drivers build config 00:04:02.441 dma/hisilicon: not in enabled drivers build config 00:04:02.441 dma/idxd: not in enabled drivers build 
config 00:04:02.441 dma/ioat: not in enabled drivers build config 00:04:02.441 dma/skeleton: not in enabled drivers build config 00:04:02.441 net/af_packet: not in enabled drivers build config 00:04:02.441 net/af_xdp: not in enabled drivers build config 00:04:02.441 net/ark: not in enabled drivers build config 00:04:02.441 net/atlantic: not in enabled drivers build config 00:04:02.441 net/avp: not in enabled drivers build config 00:04:02.441 net/axgbe: not in enabled drivers build config 00:04:02.441 net/bnx2x: not in enabled drivers build config 00:04:02.441 net/bnxt: not in enabled drivers build config 00:04:02.441 net/bonding: not in enabled drivers build config 00:04:02.441 net/cnxk: not in enabled drivers build config 00:04:02.441 net/cpfl: not in enabled drivers build config 00:04:02.441 net/cxgbe: not in enabled drivers build config 00:04:02.441 net/dpaa: not in enabled drivers build config 00:04:02.441 net/dpaa2: not in enabled drivers build config 00:04:02.441 net/e1000: not in enabled drivers build config 00:04:02.441 net/ena: not in enabled drivers build config 00:04:02.441 net/enetc: not in enabled drivers build config 00:04:02.441 net/enetfec: not in enabled drivers build config 00:04:02.441 net/enic: not in enabled drivers build config 00:04:02.441 net/failsafe: not in enabled drivers build config 00:04:02.441 net/fm10k: not in enabled drivers build config 00:04:02.441 net/gve: not in enabled drivers build config 00:04:02.441 net/hinic: not in enabled drivers build config 00:04:02.441 net/hns3: not in enabled drivers build config 00:04:02.441 net/iavf: not in enabled drivers build config 00:04:02.441 net/ice: not in enabled drivers build config 00:04:02.441 net/idpf: not in enabled drivers build config 00:04:02.441 net/igc: not in enabled drivers build config 00:04:02.441 net/ionic: not in enabled drivers build config 00:04:02.441 net/ipn3ke: not in enabled drivers build config 00:04:02.441 net/ixgbe: not in enabled drivers build config 00:04:02.441 net/mana: not in enabled drivers build config 00:04:02.441 net/memif: not in enabled drivers build config 00:04:02.441 net/mlx4: not in enabled drivers build config 00:04:02.441 net/mlx5: not in enabled drivers build config 00:04:02.441 net/mvneta: not in enabled drivers build config 00:04:02.441 net/mvpp2: not in enabled drivers build config 00:04:02.441 net/netvsc: not in enabled drivers build config 00:04:02.441 net/nfb: not in enabled drivers build config 00:04:02.441 net/nfp: not in enabled drivers build config 00:04:02.441 net/ngbe: not in enabled drivers build config 00:04:02.441 net/null: not in enabled drivers build config 00:04:02.441 net/octeontx: not in enabled drivers build config 00:04:02.441 net/octeon_ep: not in enabled drivers build config 00:04:02.441 net/pcap: not in enabled drivers build config 00:04:02.441 net/pfe: not in enabled drivers build config 00:04:02.441 net/qede: not in enabled drivers build config 00:04:02.441 net/ring: not in enabled drivers build config 00:04:02.441 net/sfc: not in enabled drivers build config 00:04:02.441 net/softnic: not in enabled drivers build config 00:04:02.441 net/tap: not in enabled drivers build config 00:04:02.441 net/thunderx: not in enabled drivers build config 00:04:02.441 net/txgbe: not in enabled drivers build config 00:04:02.441 net/vdev_netvsc: not in enabled drivers build config 00:04:02.441 net/vhost: not in enabled drivers build config 00:04:02.441 net/virtio: not in enabled drivers build config 00:04:02.441 net/vmxnet3: not in enabled drivers build config 
00:04:02.441 raw/cnxk_bphy: not in enabled drivers build config 00:04:02.441 raw/cnxk_gpio: not in enabled drivers build config 00:04:02.441 raw/dpaa2_cmdif: not in enabled drivers build config 00:04:02.441 raw/ifpga: not in enabled drivers build config 00:04:02.441 raw/ntb: not in enabled drivers build config 00:04:02.441 raw/skeleton: not in enabled drivers build config 00:04:02.441 crypto/armv8: not in enabled drivers build config 00:04:02.441 crypto/bcmfs: not in enabled drivers build config 00:04:02.441 crypto/caam_jr: not in enabled drivers build config 00:04:02.441 crypto/ccp: not in enabled drivers build config 00:04:02.441 crypto/cnxk: not in enabled drivers build config 00:04:02.441 crypto/dpaa_sec: not in enabled drivers build config 00:04:02.441 crypto/dpaa2_sec: not in enabled drivers build config 00:04:02.441 crypto/ipsec_mb: not in enabled drivers build config 00:04:02.441 crypto/mlx5: not in enabled drivers build config 00:04:02.441 crypto/mvsam: not in enabled drivers build config 00:04:02.441 crypto/nitrox: not in enabled drivers build config 00:04:02.441 crypto/null: not in enabled drivers build config 00:04:02.441 crypto/octeontx: not in enabled drivers build config 00:04:02.441 crypto/openssl: not in enabled drivers build config 00:04:02.441 crypto/scheduler: not in enabled drivers build config 00:04:02.441 crypto/uadk: not in enabled drivers build config 00:04:02.441 crypto/virtio: not in enabled drivers build config 00:04:02.441 compress/isal: not in enabled drivers build config 00:04:02.441 compress/mlx5: not in enabled drivers build config 00:04:02.441 compress/octeontx: not in enabled drivers build config 00:04:02.441 compress/zlib: not in enabled drivers build config 00:04:02.441 regex/mlx5: not in enabled drivers build config 00:04:02.441 regex/cn9k: not in enabled drivers build config 00:04:02.441 ml/cnxk: not in enabled drivers build config 00:04:02.441 vdpa/ifc: not in enabled drivers build config 00:04:02.441 vdpa/mlx5: not in enabled drivers build config 00:04:02.441 vdpa/nfp: not in enabled drivers build config 00:04:02.441 vdpa/sfc: not in enabled drivers build config 00:04:02.441 event/cnxk: not in enabled drivers build config 00:04:02.441 event/dlb2: not in enabled drivers build config 00:04:02.441 event/dpaa: not in enabled drivers build config 00:04:02.441 event/dpaa2: not in enabled drivers build config 00:04:02.441 event/dsw: not in enabled drivers build config 00:04:02.441 event/opdl: not in enabled drivers build config 00:04:02.441 event/skeleton: not in enabled drivers build config 00:04:02.441 event/sw: not in enabled drivers build config 00:04:02.441 event/octeontx: not in enabled drivers build config 00:04:02.441 baseband/acc: not in enabled drivers build config 00:04:02.441 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:04:02.441 baseband/fpga_lte_fec: not in enabled drivers build config 00:04:02.441 baseband/la12xx: not in enabled drivers build config 00:04:02.441 baseband/null: not in enabled drivers build config 00:04:02.441 baseband/turbo_sw: not in enabled drivers build config 00:04:02.441 gpu/cuda: not in enabled drivers build config 00:04:02.441 00:04:02.441 00:04:02.441 Build targets in project: 215 00:04:02.441 00:04:02.441 DPDK 23.11.0 00:04:02.441 00:04:02.441 User defined options 00:04:02.441 libdir : lib 00:04:02.441 prefix : /home/vagrant/spdk_repo/dpdk/build 00:04:02.441 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:04:02.441 c_link_args : 00:04:02.441 enable_docs : false 00:04:02.441 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:04:02.441 enable_kmods : false 00:04:02.441 machine : native 00:04:02.441 tests : false 00:04:02.441 00:04:02.442 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:02.442 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:04:02.442 10:59:31 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:04:02.442 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:04:02.442 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:02.442 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:02.442 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:02.442 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:02.442 [5/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:02.442 [6/705] Linking static target lib/librte_kvargs.a 00:04:02.702 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:02.702 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:02.702 [9/705] Linking static target lib/librte_log.a 00:04:02.702 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:02.702 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:02.702 [12/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.702 [13/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:02.702 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:02.963 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:02.963 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:02.963 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.963 [18/705] Linking target lib/librte_log.so.24.0 00:04:02.963 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:03.223 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:03.223 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:03.223 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:03.223 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:03.223 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:03.223 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:03.483 [26/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:03.483 [27/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:03.483 [28/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:04:03.483 [29/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:03.483 [30/705] Linking static target lib/librte_telemetry.a 00:04:03.483 [31/705] Linking target lib/librte_kvargs.so.24.0 00:04:03.483 [32/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:03.483 [33/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:03.483 [34/705] Generating symbol file 
lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:04:03.744 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:03.744 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:03.744 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:03.744 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:03.744 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:03.744 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:03.744 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:03.744 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.744 [43/705] Linking target lib/librte_telemetry.so.24.0 00:04:04.004 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:04.005 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:04.005 [46/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:04:04.005 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:04.265 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:04.265 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:04.265 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:04.265 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:04.265 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:04.265 [53/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:04.265 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:04.265 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:04.265 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:04.523 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:04.523 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:04.523 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:04.523 [60/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:04.523 [61/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:04.523 [62/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:04.523 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:04.523 [64/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:04.523 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:04.523 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:04.523 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:04.888 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:04.888 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:04.888 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:04.888 [71/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:04.888 [72/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:04.888 [73/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 
00:04:04.888 [74/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:04.888 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:04.888 [76/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:04.888 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:04.888 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:05.147 [79/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:05.147 [80/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:05.147 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:05.147 [82/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:05.147 [83/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:05.147 [84/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:05.147 [85/705] Linking static target lib/librte_ring.a 00:04:05.405 [86/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:05.405 [87/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:05.405 [88/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.405 [89/705] Linking static target lib/librte_eal.a 00:04:05.405 [90/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:05.405 [91/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:05.405 [92/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:05.664 [93/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:05.664 [94/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:05.664 [95/705] Linking static target lib/librte_mempool.a 00:04:05.664 [96/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:05.664 [97/705] Linking static target lib/librte_rcu.a 00:04:05.664 [98/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:05.922 [99/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:05.922 [100/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:05.922 [101/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:05.922 [102/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:05.922 [103/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:05.922 [104/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.922 [105/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.922 [106/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:05.922 [107/705] Linking static target lib/librte_mbuf.a 00:04:06.180 [108/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:04:06.180 [109/705] Linking static target lib/librte_net.a 00:04:06.180 [110/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:06.180 [111/705] Linking static target lib/librte_meter.a 00:04:06.180 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:06.180 [113/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:06.437 [114/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.437 [115/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:06.437 [116/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:06.437 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:06.437 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.696 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:06.696 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:06.953 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:06.953 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:06.953 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:06.953 [124/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:06.953 [125/705] Linking static target lib/librte_pci.a 00:04:06.953 [126/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:06.953 [127/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:06.953 [128/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.238 [129/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:07.238 [130/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:07.238 [131/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:07.238 [132/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:07.238 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:07.238 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:07.238 [135/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:07.238 [136/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:07.238 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:07.238 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:07.238 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:07.238 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:07.238 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:07.496 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:07.496 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:07.496 [144/705] Linking static target lib/librte_cmdline.a 00:04:07.496 [145/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:07.496 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:04:07.753 [147/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:04:07.753 [148/705] Linking static target lib/librte_metrics.a 00:04:07.753 [149/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:08.011 [150/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:08.011 [151/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.011 [152/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:08.011 [153/705] Linking static target lib/librte_timer.a 00:04:08.011 [154/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.268 [155/705] Compiling C object 
lib/librte_acl.a.p/acl_rte_acl.c.o 00:04:08.268 [156/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.525 [157/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:04:08.525 [158/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:04:08.525 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:04:08.782 [160/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:04:08.782 [161/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:04:08.782 [162/705] Linking static target lib/librte_bitratestats.a 00:04:08.782 [163/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:04:09.040 [164/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:04:09.040 [165/705] Linking static target lib/librte_bbdev.a 00:04:09.040 [166/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.040 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:04:09.297 [168/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:04:09.297 [169/705] Linking static target lib/acl/libavx2_tmp.a 00:04:09.297 [170/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:04:09.297 [171/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:04:09.297 [172/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:09.297 [173/705] Linking static target lib/librte_hash.a 00:04:09.297 [174/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.554 [175/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:09.554 [176/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:04:09.554 [177/705] Linking static target lib/librte_ethdev.a 00:04:09.554 [178/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:04:09.555 [179/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:04:09.555 [180/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:04:09.813 [181/705] Linking static target lib/librte_cfgfile.a 00:04:09.813 [182/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.813 [183/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.813 [184/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:04:09.813 [185/705] Linking target lib/librte_eal.so.24.0 00:04:09.813 [186/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:04:09.813 [187/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:04:09.813 [188/705] Linking target lib/librte_ring.so.24.0 00:04:09.813 [189/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.813 [190/705] Linking target lib/librte_meter.so.24.0 00:04:09.813 [191/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:04:10.071 [192/705] Linking target lib/librte_pci.so.24.0 00:04:10.071 [193/705] Linking target lib/librte_timer.so.24.0 00:04:10.071 [194/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:04:10.071 [195/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:04:10.071 [196/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:10.071 [197/705] Linking target lib/librte_cfgfile.so.24.0 00:04:10.071 [198/705] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:10.071 [199/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:04:10.071 [200/705] Linking target lib/librte_rcu.so.24.0 00:04:10.071 [201/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:04:10.071 [202/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:04:10.071 [203/705] Linking static target lib/librte_bpf.a 00:04:10.071 [204/705] Linking target lib/librte_mempool.so.24.0 00:04:10.071 [205/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:04:10.071 [206/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:10.071 [207/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:04:10.071 [208/705] Linking static target lib/librte_compressdev.a 00:04:10.071 [209/705] Linking target lib/librte_mbuf.so.24.0 00:04:10.329 [210/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.329 [211/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:04:10.329 [212/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:10.329 [213/705] Linking target lib/librte_net.so.24.0 00:04:10.329 [214/705] Linking target lib/librte_bbdev.so.24.0 00:04:10.329 [215/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:04:10.329 [216/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:10.329 [217/705] Linking target lib/librte_cmdline.so.24.0 00:04:10.588 [218/705] Linking target lib/librte_hash.so.24.0 00:04:10.588 [219/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:04:10.588 [220/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:04:10.588 [221/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:04:10.588 [222/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.588 [223/705] Linking target lib/librte_compressdev.so.24.0 00:04:10.588 [224/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:04:10.588 [225/705] Linking static target lib/librte_acl.a 00:04:10.588 [226/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:10.846 [227/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:04:10.846 [228/705] Linking static target lib/librte_distributor.a 00:04:10.846 [229/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:04:10.846 [230/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.846 [231/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:10.846 [232/705] Linking static target lib/librte_dmadev.a 00:04:10.846 [233/705] Linking target lib/librte_acl.so.24.0 00:04:10.846 [234/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.846 [235/705] Linking target lib/librte_distributor.so.24.0 00:04:11.104 [236/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:04:11.104 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:04:11.104 [238/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:11.104 [239/705] Linking target lib/librte_dmadev.so.24.0 00:04:11.362 [240/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:04:11.362 [241/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:04:11.362 [242/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:04:11.362 [243/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:04:11.362 [244/705] Linking static target lib/librte_efd.a 00:04:11.620 [245/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:04:11.620 [246/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.620 [247/705] Linking target lib/librte_efd.so.24.0 00:04:11.620 [248/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:04:11.620 [249/705] Linking static target lib/librte_dispatcher.a 00:04:11.878 [250/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:04:11.878 [251/705] Linking static target lib/librte_gpudev.a 00:04:11.878 [252/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:04:11.878 [253/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:04:11.878 [254/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:11.878 [255/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:04:11.878 [256/705] Linking static target lib/librte_cryptodev.a 00:04:11.878 [257/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:04:11.878 [258/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.136 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:04:12.136 [260/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:04:12.395 [261/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:04:12.395 [262/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:04:12.395 [263/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.395 [264/705] Linking target lib/librte_gpudev.so.24.0 00:04:12.395 [265/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:04:12.395 [266/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:04:12.395 [267/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:04:12.395 [268/705] Linking static target lib/librte_gro.a 00:04:12.662 [269/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:04:12.662 [270/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.662 [271/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:04:12.662 [272/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.662 [273/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:04:12.662 [274/705] Linking target lib/librte_ethdev.so.24.0 00:04:12.662 [275/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:04:12.662 [276/705] Linking static target lib/librte_gso.a 00:04:12.662 [277/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:04:12.662 [278/705] Linking target lib/librte_metrics.so.24.0 00:04:12.945 [279/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.945 [280/705] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:04:12.945 [281/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.945 [282/705] Linking target lib/librte_bpf.so.24.0 00:04:12.945 [283/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:04:12.945 [284/705] Linking target lib/librte_gro.so.24.0 00:04:12.945 [285/705] Linking target lib/librte_cryptodev.so.24.0 00:04:12.945 [286/705] Linking target lib/librte_bitratestats.so.24.0 00:04:12.945 [287/705] Linking target lib/librte_gso.so.24.0 00:04:12.945 [288/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:04:12.945 [289/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:04:12.945 [290/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:04:12.945 [291/705] Linking static target lib/librte_eventdev.a 00:04:12.945 [292/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:04:12.945 [293/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:04:12.945 [294/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:04:12.945 [295/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:04:12.945 [296/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:04:12.945 [297/705] Linking static target lib/librte_ip_frag.a 00:04:13.203 [298/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:04:13.203 [299/705] Linking static target lib/librte_jobstats.a 00:04:13.203 [300/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:04:13.203 [301/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.203 [302/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:04:13.203 [303/705] Linking static target lib/librte_latencystats.a 00:04:13.203 [304/705] Linking target lib/librte_ip_frag.so.24.0 00:04:13.203 [305/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:04:13.460 [306/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.461 [307/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:04:13.461 [308/705] Linking target lib/librte_jobstats.so.24.0 00:04:13.461 [309/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.461 [310/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:13.461 [311/705] Linking target lib/librte_latencystats.so.24.0 00:04:13.461 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:13.461 [313/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:04:13.718 [314/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:04:13.718 [315/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:13.718 [316/705] Linking static target lib/librte_lpm.a 00:04:13.718 [317/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:13.718 [318/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:04:13.718 [319/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:13.718 [320/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture 
output) 00:04:13.718 [321/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:04:13.718 [322/705] Linking static target lib/librte_pcapng.a 00:04:13.977 [323/705] Linking target lib/librte_lpm.so.24.0 00:04:13.977 [324/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:04:13.977 [325/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:13.977 [326/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:04:13.977 [327/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:13.977 [328/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:13.977 [329/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.977 [330/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:13.977 [331/705] Linking target lib/librte_pcapng.so.24.0 00:04:14.235 [332/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:04:14.235 [333/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:14.235 [334/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:04:14.235 [335/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:04:14.235 [336/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:14.235 [337/705] Linking static target lib/librte_power.a 00:04:14.235 [338/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.492 [339/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:04:14.492 [340/705] Linking static target lib/librte_member.a 00:04:14.492 [341/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:04:14.492 [342/705] Linking target lib/librte_eventdev.so.24.0 00:04:14.492 [343/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:04:14.492 [344/705] Linking static target lib/librte_regexdev.a 00:04:14.492 [345/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:04:14.492 [346/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:04:14.492 [347/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:04:14.492 [348/705] Linking static target lib/librte_mldev.a 00:04:14.492 [349/705] Linking target lib/librte_dispatcher.so.24.0 00:04:14.492 [350/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:04:14.492 [351/705] Linking static target lib/librte_rawdev.a 00:04:14.493 [352/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.750 [353/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:04:14.750 [354/705] Linking target lib/librte_member.so.24.0 00:04:14.750 [355/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.750 [356/705] Linking target lib/librte_power.so.24.0 00:04:14.750 [357/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:04:14.750 [358/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:04:14.750 [359/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:14.750 [360/705] Linking static target lib/librte_reorder.a 00:04:14.750 [361/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.008 [362/705] Compiling C object 
lib/librte_sched.a.p/sched_rte_pie.c.o 00:04:15.008 [363/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:04:15.008 [364/705] Linking static target lib/librte_rib.a 00:04:15.008 [365/705] Linking target lib/librte_rawdev.so.24.0 00:04:15.008 [366/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.008 [367/705] Linking target lib/librte_regexdev.so.24.0 00:04:15.008 [368/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:15.008 [369/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:04:15.008 [370/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.008 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:04:15.008 [372/705] Linking target lib/librte_reorder.so.24.0 00:04:15.008 [373/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:04:15.008 [374/705] Linking static target lib/librte_stack.a 00:04:15.265 [375/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.265 [376/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:04:15.265 [377/705] Linking target lib/librte_rib.so.24.0 00:04:15.265 [378/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.265 [379/705] Linking target lib/librte_stack.so.24.0 00:04:15.265 [380/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:04:15.265 [381/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:15.265 [382/705] Linking static target lib/librte_security.a 00:04:15.265 [383/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.265 [384/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:15.265 [385/705] Linking target lib/librte_mldev.so.24.0 00:04:15.522 [386/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:15.522 [387/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:15.522 [388/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.522 [389/705] Linking target lib/librte_security.so.24.0 00:04:15.522 [390/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:04:15.522 [391/705] Linking static target lib/librte_sched.a 00:04:15.779 [392/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:04:15.779 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:15.779 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:16.036 [395/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.036 [396/705] Linking target lib/librte_sched.so.24.0 00:04:16.036 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:16.036 [398/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:04:16.037 [399/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:04:16.294 [400/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:16.294 [401/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:04:16.294 [402/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:04:16.294 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:04:16.552 [404/705] Compiling C object 
lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:04:16.552 [405/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:04:16.552 [406/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:04:16.552 [407/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:04:16.552 [408/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:04:16.552 [409/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:04:16.810 [410/705] Linking static target lib/librte_ipsec.a 00:04:16.810 [411/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:04:16.810 [412/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:04:16.810 [413/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:04:17.068 [414/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.068 [415/705] Linking target lib/librte_ipsec.so.24.0 00:04:17.068 [416/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:04:17.068 [417/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:04:17.068 [418/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:04:17.324 [419/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:04:17.324 [420/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:04:17.324 [421/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:04:17.324 [422/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:04:17.324 [423/705] Linking static target lib/librte_fib.a 00:04:17.581 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:04:17.581 [425/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:04:17.581 [426/705] Linking static target lib/librte_pdcp.a 00:04:17.582 [427/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.582 [428/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:04:17.582 [429/705] Linking target lib/librte_fib.so.24.0 00:04:17.838 [430/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:04:17.838 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.838 [432/705] Linking target lib/librte_pdcp.so.24.0 00:04:17.838 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:04:17.838 [434/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:04:18.095 [435/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:04:18.095 [436/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:04:18.095 [437/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:04:18.095 [438/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:04:18.352 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:04:18.352 [440/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:04:18.352 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:04:18.352 [442/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:04:18.352 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:04:18.610 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:04:18.610 [445/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:04:18.610 [446/705] Linking static target 
lib/librte_port.a 00:04:18.610 [447/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:04:18.610 [448/705] Linking static target lib/librte_pdump.a 00:04:18.610 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:04:18.868 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:04:18.868 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:04:18.868 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.868 [453/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.868 [454/705] Linking target lib/librte_pdump.so.24.0 00:04:18.868 [455/705] Linking target lib/librte_port.so.24.0 00:04:18.868 [456/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:04:19.124 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:04:19.124 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:04:19.124 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:04:19.124 [460/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:04:19.124 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:04:19.124 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:04:19.382 [463/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:19.382 [464/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:04:19.382 [465/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:04:19.382 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:04:19.639 [467/705] Linking static target lib/librte_table.a 00:04:19.639 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:04:19.896 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:04:19.896 [470/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.896 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:04:19.896 [472/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:04:19.896 [473/705] Linking target lib/librte_table.so.24.0 00:04:19.896 [474/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:04:20.153 [475/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:04:20.153 [476/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:04:20.153 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:04:20.410 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:04:20.410 [479/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:04:20.410 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:04:20.410 [481/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:04:20.667 [482/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:04:20.667 [483/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:04:20.667 [484/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:04:20.667 [485/705] Linking static target lib/librte_graph.a 00:04:20.667 [486/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:04:20.667 [487/705] Compiling C 
object lib/librte_node.a.p/node_ethdev_rx.c.o 00:04:20.667 [488/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:04:20.924 [489/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.924 [490/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:04:20.924 [491/705] Linking target lib/librte_graph.so.24.0 00:04:21.182 [492/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:04:21.182 [493/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:04:21.182 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:04:21.182 [495/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:04:21.443 [496/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:04:21.443 [497/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:04:21.443 [498/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:04:21.443 [499/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:04:21.443 [500/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:21.443 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:04:21.443 [502/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:04:21.707 [503/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:21.707 [504/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:04:21.707 [505/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:21.707 [506/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:21.707 [507/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:04:21.707 [508/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:21.707 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:21.707 [510/705] Linking static target lib/librte_node.a 00:04:21.965 [511/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.965 [512/705] Linking target lib/librte_node.so.24.0 00:04:21.965 [513/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:21.965 [514/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:21.965 [515/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:21.965 [516/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:22.222 [517/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:22.222 [518/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:22.222 [519/705] Linking static target drivers/librte_bus_vdev.a 00:04:22.222 [520/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:22.222 [521/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:22.222 [522/705] Linking static target drivers/librte_bus_pci.a 00:04:22.222 [523/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:04:22.223 [524/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:22.223 [525/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:22.223 [526/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:04:22.480 [527/705] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:04:22.480 [528/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.480 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:04:22.480 [530/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:04:22.480 [531/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.480 [532/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:04:22.480 [533/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:22.738 [534/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:22.738 [535/705] Linking target drivers/librte_bus_pci.so.24.0 00:04:22.738 [536/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:04:22.738 [537/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:22.738 [538/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:22.738 [539/705] Linking static target drivers/librte_mempool_ring.a 00:04:22.738 [540/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:22.738 [541/705] Linking target drivers/librte_mempool_ring.so.24.0 00:04:22.995 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:04:22.995 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:04:23.252 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:04:23.252 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:04:23.819 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:04:23.819 [547/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:04:23.819 [548/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:04:23.819 [549/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:04:23.819 [550/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:04:23.819 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:04:24.076 [552/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:04:24.076 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:04:24.076 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:04:24.334 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:04:24.334 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:04:24.334 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:04:24.591 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:04:24.591 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:04:24.591 [560/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:04:24.591 [561/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:04:24.849 [562/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:04:24.849 [563/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:04:25.108 [564/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:04:25.108 [565/705] Compiling C object 
app/dpdk-graph.p/graph_graph.c.o 00:04:25.108 [566/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:04:25.108 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:04:25.367 [568/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:04:25.367 [569/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:04:25.367 [570/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:04:25.367 [571/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:04:25.367 [572/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:04:25.367 [573/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:04:25.627 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:04:25.627 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:04:25.627 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:04:25.627 [577/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:04:25.885 [578/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:04:25.885 [579/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:04:25.885 [580/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:04:25.885 [581/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:04:26.142 [582/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:04:26.142 [583/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:04:26.142 [584/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:04:26.142 [585/705] Linking static target drivers/librte_net_i40e.a 00:04:26.142 [586/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:04:26.142 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:04:26.142 [588/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:04:26.400 [589/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:04:26.400 [590/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:04:26.658 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:04:26.658 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:04:26.658 [593/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:04:26.658 [594/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.658 [595/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:04:26.658 [596/705] Linking target drivers/librte_net_i40e.so.24.0 00:04:26.915 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:04:26.915 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:04:27.173 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:04:27.173 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:04:27.173 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:04:27.173 [602/705] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:04:27.431 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:04:27.431 [604/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:04:27.431 [605/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:04:27.431 [606/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:04:27.431 [607/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:04:27.431 [608/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:04:27.431 [609/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:27.431 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:04:27.431 [611/705] Linking static target lib/librte_vhost.a 00:04:27.690 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:04:27.690 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:04:27.948 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:04:27.948 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:04:27.948 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:04:28.206 [617/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.206 [618/705] Linking target lib/librte_vhost.so.24.0 00:04:28.464 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:04:28.464 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:04:28.464 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:04:28.464 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:04:28.722 [623/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:04:28.722 [624/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:04:28.722 [625/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:04:28.722 [626/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:04:28.722 [627/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:04:28.722 [628/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:04:28.980 [629/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:04:28.980 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:04:28.980 [631/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:04:28.980 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:04:29.239 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:04:29.239 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:04:29.239 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:04:29.239 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:04:29.239 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:04:29.239 [638/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:04:29.497 [639/705] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:04:29.497 [640/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:04:29.497 [641/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:04:29.497 [642/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:04:29.497 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:04:29.757 [644/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:04:29.757 [645/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:04:29.757 [646/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:04:29.757 [647/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:04:29.757 [648/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:04:29.757 [649/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:04:30.014 [650/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:04:30.014 [651/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:04:30.014 [652/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:04:30.271 [653/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:04:30.271 [654/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:04:30.271 [655/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:04:30.271 [656/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:04:30.527 [657/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:04:30.527 [658/705] Linking static target lib/librte_pipeline.a 00:04:30.527 [659/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:04:30.785 [660/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:04:30.785 [661/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:04:30.785 [662/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:04:30.785 [663/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:04:30.785 [664/705] Linking target app/dpdk-dumpcap 00:04:31.043 [665/705] Linking target app/dpdk-graph 00:04:31.043 [666/705] Linking target app/dpdk-pdump 00:04:31.043 [667/705] Linking target app/dpdk-proc-info 00:04:31.043 [668/705] Linking target app/dpdk-test-acl 00:04:31.043 [669/705] Linking target app/dpdk-test-bbdev 00:04:31.300 [670/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:04:31.300 [671/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:04:31.300 [672/705] Linking target app/dpdk-test-cmdline 00:04:31.300 [673/705] Linking target app/dpdk-test-compress-perf 00:04:31.300 [674/705] Linking target app/dpdk-test-dma-perf 00:04:31.300 [675/705] Linking target app/dpdk-test-crypto-perf 00:04:31.558 [676/705] Linking target app/dpdk-test-eventdev 00:04:31.558 [677/705] Linking target app/dpdk-test-fib 00:04:31.558 [678/705] Linking target app/dpdk-test-flow-perf 00:04:31.558 [679/705] Linking target app/dpdk-test-gpudev 00:04:31.558 [680/705] Linking target app/dpdk-test-mldev 00:04:31.815 [681/705] Linking target app/dpdk-test-pipeline 00:04:31.815 [682/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:04:31.815 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:04:32.072 [684/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:04:32.072 [685/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:04:32.072 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:04:32.072 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:04:32.329 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:04:32.329 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:04:32.329 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:04:32.329 [691/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.329 [692/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:04:32.587 [693/705] Linking target lib/librte_pipeline.so.24.0 00:04:32.587 [694/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:04:32.587 [695/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:04:32.587 [696/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:04:32.587 [697/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:04:32.844 [698/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:04:32.844 [699/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:04:33.101 [700/705] Linking target app/dpdk-test-regex 00:04:33.101 [701/705] Linking target app/dpdk-test-sad 00:04:33.101 [702/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:04:33.101 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:04:33.359 [704/705] Linking target app/dpdk-test-security-perf 00:04:33.617 [705/705] Linking target app/dpdk-testpmd 00:04:33.617 11:00:02 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:04:33.617 11:00:02 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:04:33.617 11:00:02 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:04:33.617 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:04:33.617 [0/1] Installing files. 
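For reference, the build-and-install step recorded at this point in the log corresponds roughly to the following Meson/Ninja invocation. This is a minimal sketch: only the final `ninja ... install` command appears verbatim in the log above; the configure step and its --prefix value are assumptions inferred from the install destinations shown below, not taken from this log.
$ cd /home/vagrant/spdk_repo/dpdk
$ meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build   # configure step assumed; prefix inferred from the .../build/share/dpdk paths below
$ ninja -C build-tmp -j10                                             # compiles and links the 705 targets listed above (libs, drivers, apps)
$ ninja -C build-tmp -j10 install                                     # the command shown in the log; installs libraries, apps, and the examples/ tree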
00:04:33.878 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:33.878 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:33.879 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:33.880 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:33.880 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:33.881 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:33.882 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:33.883 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:33.883 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:33.883 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.883 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.883 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.883 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.883 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.883 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.883 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.883 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:04:33.884 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:04:33.884 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:33.884 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:34.148 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:34.148 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:34.148 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:34.148 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:34.148 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:34.148 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:34.148 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:34.148 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:34.148 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:34.148 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:34.148 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.148 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.148 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.148 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.149 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.150 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.151 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.152 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:34.153 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:34.153 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:04:34.153 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:04:34.153 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:04:34.153 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:04:34.153 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:04:34.153 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:04:34.153 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:04:34.153 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:04:34.153 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:04:34.153 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:04:34.153 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:04:34.153 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:04:34.153 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:04:34.153 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:04:34.153 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:04:34.153 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:04:34.153 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:04:34.153 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:04:34.153 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:04:34.153 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:04:34.153 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:04:34.153 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:04:34.153 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:04:34.153 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:04:34.153 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:04:34.153 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:04:34.153 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:04:34.153 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:04:34.153 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:04:34.153 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:04:34.153 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:04:34.153 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:04:34.153 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:04:34.153 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:04:34.153 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:04:34.153 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:04:34.153 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:04:34.153 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:04:34.153 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:04:34.153 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:04:34.153 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:04:34.153 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:04:34.153 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:04:34.153 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:04:34.153 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:04:34.153 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:04:34.153 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:04:34.154 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:04:34.154 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:04:34.154 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:04:34.154 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:04:34.154 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:04:34.154 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:04:34.154 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:04:34.154 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:04:34.154 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:04:34.154 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:04:34.154 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:04:34.154 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:04:34.154 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:04:34.154 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:04:34.154 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:04:34.154 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:04:34.154 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:04:34.154 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:04:34.154 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:04:34.154 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:04:34.154 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:04:34.154 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:04:34.154 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:04:34.154 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:04:34.154 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:04:34.154 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:04:34.154 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:04:34.154 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:04:34.154 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:04:34.154 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:04:34.154 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:04:34.154 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:04:34.154 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:04:34.154 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:04:34.154 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:04:34.154 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:04:34.154 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:04:34.154 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:04:34.154 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:04:34.154 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:04:34.154 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:04:34.154 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:04:34.154 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:04:34.154 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:04:34.154 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:04:34.154 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:04:34.154 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:04:34.154 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:04:34.154 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:04:34.154 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:04:34.154 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:04:34.154 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:04:34.154 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:04:34.154 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:04:34.154 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:04:34.154 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:04:34.154 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:04:34.154 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:04:34.154 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:04:34.154 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:04:34.154 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:04:34.154 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:04:34.154 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:04:34.154 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:04:34.154 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:04:34.154 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:04:34.154 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:04:34.154 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:04:34.154 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:04:34.154 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:04:34.154 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:04:34.154 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:04:34.154 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:04:34.154 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:04:34.154 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:04:34.154 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:04:34.155 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:04:34.155 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:04:34.155 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:34.155 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:04:34.155 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:34.155 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:04:34.155 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:34.155 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:04:34.155 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:34.155 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:04:34.155 11:00:03 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:04:34.155 ************************************ 00:04:34.155 END TEST build_native_dpdk 00:04:34.155 ************************************ 00:04:34.155 11:00:03 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:34.155 00:04:34.155 real 0m38.209s 00:04:34.155 user 4m23.332s 00:04:34.155 sys 0m39.164s 00:04:34.155 11:00:03 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:34.155 11:00:03 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:04:34.413 11:00:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:34.413 11:00:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:34.413 11:00:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:34.413 11:00:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:34.413 11:00:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:34.413 11:00:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:34.413 11:00:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:34.413 11:00:03 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:04:34.413 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:04:34.413 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:04:34.413 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:04:34.413 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:34.670 Using 'verbs' RDMA provider 00:04:46.009 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:55.995 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:56.515 Creating mk/config.mk...done. 00:04:56.515 Creating mk/cc.flags.mk...done. 00:04:56.515 Type 'make' to build. 00:04:56.515 11:00:25 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:56.515 11:00:25 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:56.515 11:00:25 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:56.515 11:00:25 -- common/autotest_common.sh@10 -- $ set +x 00:04:56.515 ************************************ 00:04:56.515 START TEST make 00:04:56.515 ************************************ 00:04:56.515 11:00:25 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:56.776 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:56.776 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:56.776 meson setup builddir \ 00:04:56.776 -Dwith-libaio=enabled \ 00:04:56.776 -Dwith-liburing=enabled \ 00:04:56.776 -Dwith-libvfn=disabled \ 00:04:56.776 -Dwith-spdk=false && \ 00:04:56.776 meson compile -C builddir && \ 00:04:56.776 cd -) 00:04:56.776 make[1]: Nothing to be done for 'all'. 
00:04:59.327 The Meson build system 00:04:59.327 Version: 1.5.0 00:04:59.327 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:59.327 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:59.327 Build type: native build 00:04:59.327 Project name: xnvme 00:04:59.327 Project version: 0.7.3 00:04:59.327 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:59.327 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:59.327 Host machine cpu family: x86_64 00:04:59.327 Host machine cpu: x86_64 00:04:59.327 Message: host_machine.system: linux 00:04:59.327 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:59.327 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:59.327 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:59.327 Run-time dependency threads found: YES 00:04:59.327 Has header "setupapi.h" : NO 00:04:59.327 Has header "linux/blkzoned.h" : YES 00:04:59.327 Has header "linux/blkzoned.h" : YES (cached) 00:04:59.327 Has header "libaio.h" : YES 00:04:59.327 Library aio found: YES 00:04:59.327 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:59.327 Run-time dependency liburing found: YES 2.2 00:04:59.327 Dependency libvfn skipped: feature with-libvfn disabled 00:04:59.327 Run-time dependency appleframeworks found: NO (tried framework) 00:04:59.327 Run-time dependency appleframeworks found: NO (tried framework) 00:04:59.327 Configuring xnvme_config.h using configuration 00:04:59.327 Configuring xnvme.spec using configuration 00:04:59.327 Run-time dependency bash-completion found: YES 2.11 00:04:59.327 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:59.327 Program cp found: YES (/usr/bin/cp) 00:04:59.327 Has header "winsock2.h" : NO 00:04:59.327 Has header "dbghelp.h" : NO 00:04:59.327 Library rpcrt4 found: NO 00:04:59.327 Library rt found: YES 00:04:59.327 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:59.327 Found CMake: /usr/bin/cmake (3.27.7) 00:04:59.327 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:04:59.327 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:04:59.327 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:04:59.327 Build targets in project: 32 00:04:59.327 00:04:59.327 xnvme 0.7.3 00:04:59.327 00:04:59.327 User defined options 00:04:59.327 with-libaio : enabled 00:04:59.327 with-liburing: enabled 00:04:59.327 with-libvfn : disabled 00:04:59.327 with-spdk : false 00:04:59.327 00:04:59.327 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:59.906 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:59.906 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:04:59.906 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:04:59.906 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:04:59.906 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:04:59.906 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:04:59.906 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:04:59.906 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:04:59.906 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:04:59.906 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:04:59.906 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 
00:04:59.906 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:04:59.906 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:04:59.906 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:04:59.906 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:04:59.906 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:04:59.906 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:04:59.906 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:04:59.906 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:04:59.906 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:04:59.906 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:04:59.906 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:05:00.174 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:05:00.174 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:05:00.174 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:05:00.174 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:05:00.174 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:05:00.174 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:05:00.174 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:05:00.174 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:05:00.174 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:05:00.174 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:05:00.174 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:05:00.174 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:05:00.174 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:05:00.174 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:05:00.174 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:05:00.174 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:05:00.174 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:05:00.174 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:05:00.174 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:05:00.174 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:05:00.174 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:05:00.174 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:05:00.174 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:05:00.174 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:05:00.174 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:05:00.174 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:05:00.174 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:05:00.174 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:05:00.174 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:05:00.174 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:05:00.174 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:05:00.174 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:05:00.174 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:05:00.174 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:05:00.174 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:05:00.174 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:05:00.174 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:05:00.174 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:05:00.174 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:05:00.435 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:05:00.435 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:05:00.435 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:05:00.435 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:05:00.435 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:05:00.435 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:05:00.435 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:05:00.436 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:05:00.436 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:05:00.436 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:05:00.436 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:05:00.436 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:05:00.436 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:05:00.436 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:05:00.436 [75/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:05:00.436 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:05:00.436 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:05:00.436 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:05:00.436 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:05:00.436 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:05:00.436 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:05:00.695 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:05:00.695 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:05:00.695 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:05:00.695 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:05:00.695 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:05:00.695 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:05:00.695 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:05:00.695 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:05:00.695 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:05:00.695 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:05:00.695 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:05:00.695 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:05:00.695 [94/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:05:00.695 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:05:00.695 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:05:00.695 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:05:00.695 [98/203] Compiling 
C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:05:00.695 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:05:00.695 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:05:00.695 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:05:00.695 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:05:00.695 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:05:00.695 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:05:00.695 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:05:00.695 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:05:00.695 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:05:00.695 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:05:00.695 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:05:00.695 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:05:00.695 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:05:00.695 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:05:00.695 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:05:00.953 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:05:00.953 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:05:00.953 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:05:00.953 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:05:00.953 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:05:00.953 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:05:00.953 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:05:00.953 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:05:00.953 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:05:00.953 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:05:00.953 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:05:00.953 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:05:00.953 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:05:00.953 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:05:00.953 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:05:00.953 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:05:00.953 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:05:00.953 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:05:00.953 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:05:00.953 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:05:00.953 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:05:00.953 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:05:00.953 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:05:00.953 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:05:00.953 [138/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:05:00.953 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:05:00.954 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:05:01.211 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:05:01.211 [142/203] Compiling C object 
lib/libxnvme.so.p/xnvme_spec.c.o 00:05:01.211 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:05:01.212 [144/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:05:01.212 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:05:01.212 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:05:01.212 [147/203] Linking target lib/libxnvme.so 00:05:01.212 [148/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:05:01.212 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:05:01.212 [150/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:05:01.212 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:05:01.212 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:05:01.212 [153/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:05:01.212 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:05:01.212 [155/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:05:01.212 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:05:01.212 [157/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:05:01.470 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:05:01.470 [159/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:05:01.470 [160/203] Compiling C object tools/xdd.p/xdd.c.o 00:05:01.470 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:05:01.470 [162/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:05:01.470 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:05:01.470 [164/203] Compiling C object tools/lblk.p/lblk.c.o 00:05:01.470 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:05:01.470 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:05:01.470 [167/203] Compiling C object tools/zoned.p/zoned.c.o 00:05:01.470 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:05:01.470 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:05:01.470 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:05:01.470 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:05:01.470 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:05:01.728 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:05:01.728 [174/203] Linking static target lib/libxnvme.a 00:05:01.728 [175/203] Linking target tests/xnvme_tests_scc 00:05:01.728 [176/203] Linking target tests/xnvme_tests_enum 00:05:01.728 [177/203] Linking target tests/xnvme_tests_buf 00:05:01.728 [178/203] Linking target tests/xnvme_tests_lblk 00:05:01.728 [179/203] Linking target tests/xnvme_tests_async_intf 00:05:01.728 [180/203] Linking target tests/xnvme_tests_cli 00:05:01.728 [181/203] Linking target tests/xnvme_tests_ioworker 00:05:01.728 [182/203] Linking target tests/xnvme_tests_xnvme_file 00:05:01.728 [183/203] Linking target tests/xnvme_tests_znd_append 00:05:01.728 [184/203] Linking target tests/xnvme_tests_znd_explicit_open 00:05:01.728 [185/203] Linking target tests/xnvme_tests_znd_state 00:05:01.728 [186/203] Linking target tests/xnvme_tests_kvs 00:05:01.728 [187/203] Linking target tests/xnvme_tests_xnvme_cli 00:05:01.728 [188/203] Linking target tests/xnvme_tests_znd_zrwa 00:05:01.728 [189/203] Linking target tests/xnvme_tests_map 00:05:01.728 [190/203] Linking target tools/xnvme 00:05:01.728 [191/203] 
Linking target tools/lblk 00:05:01.728 [192/203] Linking target tools/xdd 00:05:01.728 [193/203] Linking target tools/xnvme_file 00:05:01.728 [194/203] Linking target tools/zoned 00:05:01.728 [195/203] Linking target examples/xnvme_io_async 00:05:01.728 [196/203] Linking target examples/xnvme_dev 00:05:01.728 [197/203] Linking target examples/xnvme_hello 00:05:01.728 [198/203] Linking target examples/xnvme_single_sync 00:05:01.728 [199/203] Linking target examples/zoned_io_async 00:05:01.728 [200/203] Linking target examples/xnvme_enum 00:05:01.728 [201/203] Linking target tools/kvs 00:05:01.728 [202/203] Linking target examples/xnvme_single_async 00:05:01.728 [203/203] Linking target examples/zoned_io_sync 00:05:01.728 INFO: autodetecting backend as ninja 00:05:01.728 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:05:01.728 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:05:33.798 CC lib/ut/ut.o 00:05:33.798 CC lib/log/log_flags.o 00:05:33.798 CC lib/log/log.o 00:05:33.798 CC lib/log/log_deprecated.o 00:05:33.798 CC lib/ut_mock/mock.o 00:05:33.798 LIB libspdk_ut_mock.a 00:05:33.798 LIB libspdk_log.a 00:05:33.798 LIB libspdk_ut.a 00:05:33.798 SO libspdk_ut_mock.so.6.0 00:05:33.798 SO libspdk_ut.so.2.0 00:05:33.798 SO libspdk_log.so.7.0 00:05:33.798 SYMLINK libspdk_ut.so 00:05:33.798 SYMLINK libspdk_ut_mock.so 00:05:33.798 SYMLINK libspdk_log.so 00:05:33.798 CXX lib/trace_parser/trace.o 00:05:33.798 CC lib/dma/dma.o 00:05:33.798 CC lib/ioat/ioat.o 00:05:33.798 CC lib/util/base64.o 00:05:33.798 CC lib/util/crc16.o 00:05:33.798 CC lib/util/cpuset.o 00:05:33.798 CC lib/util/crc32.o 00:05:33.798 CC lib/util/bit_array.o 00:05:33.798 CC lib/util/crc32c.o 00:05:33.798 CC lib/vfio_user/host/vfio_user_pci.o 00:05:33.798 CC lib/util/crc32_ieee.o 00:05:33.798 CC lib/util/crc64.o 00:05:33.798 CC lib/util/dif.o 00:05:33.798 CC lib/util/fd.o 00:05:33.798 CC lib/vfio_user/host/vfio_user.o 00:05:33.798 LIB libspdk_dma.a 00:05:33.798 SO libspdk_dma.so.5.0 00:05:33.798 CC lib/util/fd_group.o 00:05:33.798 CC lib/util/file.o 00:05:33.798 SYMLINK libspdk_dma.so 00:05:33.798 CC lib/util/hexlify.o 00:05:33.798 CC lib/util/iov.o 00:05:33.798 CC lib/util/math.o 00:05:33.798 LIB libspdk_ioat.a 00:05:33.798 CC lib/util/net.o 00:05:33.798 SO libspdk_ioat.so.7.0 00:05:33.798 CC lib/util/pipe.o 00:05:33.798 SYMLINK libspdk_ioat.so 00:05:33.798 CC lib/util/strerror_tls.o 00:05:33.798 LIB libspdk_vfio_user.a 00:05:33.798 CC lib/util/string.o 00:05:33.798 CC lib/util/uuid.o 00:05:33.798 CC lib/util/xor.o 00:05:33.798 SO libspdk_vfio_user.so.5.0 00:05:33.798 CC lib/util/zipf.o 00:05:33.798 SYMLINK libspdk_vfio_user.so 00:05:33.798 CC lib/util/md5.o 00:05:33.799 LIB libspdk_util.a 00:05:33.799 SO libspdk_util.so.10.0 00:05:33.799 LIB libspdk_trace_parser.a 00:05:33.799 SO libspdk_trace_parser.so.6.0 00:05:33.799 SYMLINK libspdk_util.so 00:05:33.799 SYMLINK libspdk_trace_parser.so 00:05:33.799 CC lib/json/json_parse.o 00:05:33.799 CC lib/json/json_util.o 00:05:33.799 CC lib/json/json_write.o 00:05:33.799 CC lib/env_dpdk/env.o 00:05:33.799 CC lib/env_dpdk/memory.o 00:05:33.799 CC lib/vmd/vmd.o 00:05:33.799 CC lib/rdma_provider/common.o 00:05:33.799 CC lib/idxd/idxd.o 00:05:33.799 CC lib/rdma_utils/rdma_utils.o 00:05:33.799 CC lib/conf/conf.o 00:05:33.799 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:33.799 CC lib/env_dpdk/pci.o 00:05:33.799 LIB libspdk_conf.a 00:05:33.799 CC lib/env_dpdk/init.o 00:05:33.799 SO libspdk_conf.so.6.0 00:05:33.799 LIB 
libspdk_rdma_utils.a 00:05:33.799 LIB libspdk_json.a 00:05:33.799 SYMLINK libspdk_conf.so 00:05:33.799 CC lib/env_dpdk/threads.o 00:05:33.799 SO libspdk_rdma_utils.so.1.0 00:05:33.799 SO libspdk_json.so.6.0 00:05:33.799 LIB libspdk_rdma_provider.a 00:05:33.799 SYMLINK libspdk_rdma_utils.so 00:05:33.799 CC lib/env_dpdk/pci_ioat.o 00:05:33.799 SO libspdk_rdma_provider.so.6.0 00:05:33.799 SYMLINK libspdk_json.so 00:05:33.799 CC lib/env_dpdk/pci_virtio.o 00:05:33.799 SYMLINK libspdk_rdma_provider.so 00:05:33.799 CC lib/env_dpdk/pci_vmd.o 00:05:33.799 CC lib/env_dpdk/pci_idxd.o 00:05:33.799 CC lib/vmd/led.o 00:05:33.799 CC lib/env_dpdk/pci_event.o 00:05:33.799 CC lib/env_dpdk/sigbus_handler.o 00:05:33.799 CC lib/idxd/idxd_user.o 00:05:33.799 CC lib/jsonrpc/jsonrpc_server.o 00:05:33.799 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:33.799 CC lib/jsonrpc/jsonrpc_client.o 00:05:33.799 CC lib/env_dpdk/pci_dpdk.o 00:05:33.799 CC lib/idxd/idxd_kernel.o 00:05:33.799 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:33.799 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:33.799 LIB libspdk_vmd.a 00:05:33.799 SO libspdk_vmd.so.6.0 00:05:33.799 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:33.799 LIB libspdk_idxd.a 00:05:33.799 SYMLINK libspdk_vmd.so 00:05:33.799 SO libspdk_idxd.so.12.1 00:05:33.799 LIB libspdk_jsonrpc.a 00:05:33.799 SYMLINK libspdk_idxd.so 00:05:33.799 SO libspdk_jsonrpc.so.6.0 00:05:33.799 SYMLINK libspdk_jsonrpc.so 00:05:33.799 LIB libspdk_env_dpdk.a 00:05:33.799 CC lib/rpc/rpc.o 00:05:33.799 SO libspdk_env_dpdk.so.15.0 00:05:33.799 SYMLINK libspdk_env_dpdk.so 00:05:33.799 LIB libspdk_rpc.a 00:05:33.799 SO libspdk_rpc.so.6.0 00:05:33.799 SYMLINK libspdk_rpc.so 00:05:33.799 CC lib/trace/trace_rpc.o 00:05:33.799 CC lib/trace/trace.o 00:05:33.799 CC lib/trace/trace_flags.o 00:05:33.799 CC lib/keyring/keyring.o 00:05:33.799 CC lib/keyring/keyring_rpc.o 00:05:33.799 CC lib/notify/notify_rpc.o 00:05:33.799 CC lib/notify/notify.o 00:05:33.799 LIB libspdk_notify.a 00:05:33.799 SO libspdk_notify.so.6.0 00:05:33.799 LIB libspdk_keyring.a 00:05:33.799 SYMLINK libspdk_notify.so 00:05:33.799 LIB libspdk_trace.a 00:05:33.799 SO libspdk_keyring.so.2.0 00:05:33.799 SO libspdk_trace.so.11.0 00:05:33.799 SYMLINK libspdk_trace.so 00:05:33.799 SYMLINK libspdk_keyring.so 00:05:34.057 CC lib/sock/sock.o 00:05:34.057 CC lib/sock/sock_rpc.o 00:05:34.057 CC lib/thread/thread.o 00:05:34.057 CC lib/thread/iobuf.o 00:05:34.316 LIB libspdk_sock.a 00:05:34.316 SO libspdk_sock.so.10.0 00:05:34.316 SYMLINK libspdk_sock.so 00:05:34.574 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:34.574 CC lib/nvme/nvme_ctrlr.o 00:05:34.574 CC lib/nvme/nvme_fabric.o 00:05:34.574 CC lib/nvme/nvme_pcie.o 00:05:34.574 CC lib/nvme/nvme.o 00:05:34.574 CC lib/nvme/nvme_ns.o 00:05:34.574 CC lib/nvme/nvme_ns_cmd.o 00:05:34.574 CC lib/nvme/nvme_pcie_common.o 00:05:34.574 CC lib/nvme/nvme_qpair.o 00:05:35.141 CC lib/nvme/nvme_quirks.o 00:05:35.141 LIB libspdk_thread.a 00:05:35.141 SO libspdk_thread.so.10.1 00:05:35.400 CC lib/nvme/nvme_transport.o 00:05:35.400 CC lib/nvme/nvme_discovery.o 00:05:35.400 SYMLINK libspdk_thread.so 00:05:35.400 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:35.400 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:35.400 CC lib/nvme/nvme_tcp.o 00:05:35.400 CC lib/accel/accel.o 00:05:35.400 CC lib/nvme/nvme_opal.o 00:05:35.400 CC lib/nvme/nvme_io_msg.o 00:05:35.658 CC lib/nvme/nvme_poll_group.o 00:05:35.658 CC lib/nvme/nvme_zns.o 00:05:35.658 CC lib/nvme/nvme_stubs.o 00:05:35.916 CC lib/nvme/nvme_auth.o 00:05:35.916 CC lib/nvme/nvme_cuse.o 00:05:35.916 CC 
lib/nvme/nvme_rdma.o 00:05:36.176 CC lib/blob/blobstore.o 00:05:36.176 CC lib/blob/request.o 00:05:36.176 CC lib/blob/zeroes.o 00:05:36.176 CC lib/accel/accel_rpc.o 00:05:36.434 CC lib/blob/blob_bs_dev.o 00:05:36.434 CC lib/accel/accel_sw.o 00:05:36.691 CC lib/init/json_config.o 00:05:36.691 CC lib/init/subsystem.o 00:05:36.691 CC lib/init/subsystem_rpc.o 00:05:36.691 CC lib/virtio/virtio.o 00:05:36.691 CC lib/init/rpc.o 00:05:36.691 CC lib/fsdev/fsdev.o 00:05:36.691 CC lib/virtio/virtio_vhost_user.o 00:05:36.691 LIB libspdk_accel.a 00:05:36.692 CC lib/virtio/virtio_vfio_user.o 00:05:36.692 CC lib/virtio/virtio_pci.o 00:05:36.692 SO libspdk_accel.so.16.0 00:05:36.692 CC lib/fsdev/fsdev_io.o 00:05:36.692 SYMLINK libspdk_accel.so 00:05:36.692 CC lib/fsdev/fsdev_rpc.o 00:05:36.692 LIB libspdk_init.a 00:05:36.949 SO libspdk_init.so.6.0 00:05:36.949 SYMLINK libspdk_init.so 00:05:36.949 LIB libspdk_nvme.a 00:05:36.949 LIB libspdk_virtio.a 00:05:36.949 CC lib/bdev/bdev.o 00:05:36.949 CC lib/bdev/bdev_rpc.o 00:05:36.949 CC lib/bdev/bdev_zone.o 00:05:36.949 CC lib/bdev/part.o 00:05:36.949 SO libspdk_virtio.so.7.0 00:05:36.949 SYMLINK libspdk_virtio.so 00:05:36.949 SO libspdk_nvme.so.14.0 00:05:36.949 CC lib/event/app.o 00:05:37.207 CC lib/bdev/scsi_nvme.o 00:05:37.207 CC lib/event/reactor.o 00:05:37.207 CC lib/event/log_rpc.o 00:05:37.207 CC lib/event/app_rpc.o 00:05:37.207 SYMLINK libspdk_nvme.so 00:05:37.207 CC lib/event/scheduler_static.o 00:05:37.207 LIB libspdk_fsdev.a 00:05:37.207 SO libspdk_fsdev.so.1.0 00:05:37.465 SYMLINK libspdk_fsdev.so 00:05:37.465 LIB libspdk_event.a 00:05:37.465 SO libspdk_event.so.14.0 00:05:37.465 SYMLINK libspdk_event.so 00:05:37.465 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:38.445 LIB libspdk_fuse_dispatcher.a 00:05:38.446 SO libspdk_fuse_dispatcher.so.1.0 00:05:38.446 SYMLINK libspdk_fuse_dispatcher.so 00:05:39.040 LIB libspdk_bdev.a 00:05:39.040 SO libspdk_bdev.so.16.0 00:05:39.040 LIB libspdk_blob.a 00:05:39.297 SO libspdk_blob.so.11.0 00:05:39.297 SYMLINK libspdk_bdev.so 00:05:39.297 SYMLINK libspdk_blob.so 00:05:39.297 CC lib/nbd/nbd.o 00:05:39.297 CC lib/nbd/nbd_rpc.o 00:05:39.297 CC lib/ftl/ftl_core.o 00:05:39.297 CC lib/ftl/ftl_init.o 00:05:39.297 CC lib/ublk/ublk.o 00:05:39.297 CC lib/ftl/ftl_layout.o 00:05:39.297 CC lib/scsi/dev.o 00:05:39.297 CC lib/nvmf/ctrlr.o 00:05:39.554 CC lib/blobfs/blobfs.o 00:05:39.554 CC lib/lvol/lvol.o 00:05:39.554 CC lib/nvmf/ctrlr_discovery.o 00:05:39.554 CC lib/blobfs/tree.o 00:05:39.554 CC lib/scsi/lun.o 00:05:39.554 CC lib/nvmf/ctrlr_bdev.o 00:05:39.554 CC lib/nvmf/subsystem.o 00:05:39.811 CC lib/ftl/ftl_debug.o 00:05:39.811 LIB libspdk_nbd.a 00:05:39.811 SO libspdk_nbd.so.7.0 00:05:39.811 CC lib/scsi/port.o 00:05:39.811 SYMLINK libspdk_nbd.so 00:05:39.811 CC lib/scsi/scsi.o 00:05:40.068 CC lib/ftl/ftl_io.o 00:05:40.068 CC lib/ftl/ftl_sb.o 00:05:40.068 CC lib/nvmf/nvmf.o 00:05:40.068 CC lib/scsi/scsi_bdev.o 00:05:40.068 CC lib/ublk/ublk_rpc.o 00:05:40.068 LIB libspdk_blobfs.a 00:05:40.068 SO libspdk_blobfs.so.10.0 00:05:40.068 LIB libspdk_ublk.a 00:05:40.068 CC lib/ftl/ftl_l2p.o 00:05:40.068 LIB libspdk_lvol.a 00:05:40.068 CC lib/nvmf/nvmf_rpc.o 00:05:40.068 SYMLINK libspdk_blobfs.so 00:05:40.068 SO libspdk_ublk.so.3.0 00:05:40.068 CC lib/ftl/ftl_l2p_flat.o 00:05:40.068 CC lib/ftl/ftl_nv_cache.o 00:05:40.326 SO libspdk_lvol.so.10.0 00:05:40.326 SYMLINK libspdk_lvol.so 00:05:40.326 SYMLINK libspdk_ublk.so 00:05:40.326 CC lib/ftl/ftl_band.o 00:05:40.326 CC lib/ftl/ftl_band_ops.o 00:05:40.326 CC lib/ftl/ftl_writer.o 
00:05:40.326 CC lib/ftl/ftl_rq.o 00:05:40.582 CC lib/ftl/ftl_reloc.o 00:05:40.582 CC lib/nvmf/transport.o 00:05:40.582 CC lib/scsi/scsi_pr.o 00:05:40.582 CC lib/ftl/ftl_l2p_cache.o 00:05:40.582 CC lib/ftl/ftl_p2l.o 00:05:40.839 CC lib/scsi/scsi_rpc.o 00:05:40.839 CC lib/scsi/task.o 00:05:40.839 CC lib/nvmf/tcp.o 00:05:40.839 CC lib/nvmf/stubs.o 00:05:40.839 CC lib/ftl/ftl_p2l_log.o 00:05:40.839 CC lib/nvmf/mdns_server.o 00:05:40.839 CC lib/nvmf/rdma.o 00:05:40.839 CC lib/nvmf/auth.o 00:05:40.839 CC lib/ftl/mngt/ftl_mngt.o 00:05:41.096 LIB libspdk_scsi.a 00:05:41.096 SO libspdk_scsi.so.9.0 00:05:41.096 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:41.096 SYMLINK libspdk_scsi.so 00:05:41.096 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:41.096 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:41.353 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:41.353 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:41.353 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:41.353 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:41.353 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:41.353 CC lib/iscsi/conn.o 00:05:41.353 CC lib/vhost/vhost.o 00:05:41.353 CC lib/vhost/vhost_rpc.o 00:05:41.611 CC lib/vhost/vhost_scsi.o 00:05:41.611 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:41.611 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:41.611 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:41.611 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:41.868 CC lib/ftl/utils/ftl_conf.o 00:05:41.868 CC lib/ftl/utils/ftl_md.o 00:05:41.868 CC lib/ftl/utils/ftl_mempool.o 00:05:41.868 CC lib/iscsi/init_grp.o 00:05:41.868 CC lib/ftl/utils/ftl_bitmap.o 00:05:41.868 CC lib/ftl/utils/ftl_property.o 00:05:41.868 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:41.868 CC lib/vhost/vhost_blk.o 00:05:41.868 CC lib/iscsi/iscsi.o 00:05:42.125 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:42.125 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:42.125 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:42.125 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:42.125 CC lib/iscsi/param.o 00:05:42.125 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:42.125 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:42.125 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:42.125 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:42.383 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:42.383 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:42.383 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:42.383 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:42.383 CC lib/vhost/rte_vhost_user.o 00:05:42.383 CC lib/iscsi/portal_grp.o 00:05:42.383 CC lib/ftl/base/ftl_base_dev.o 00:05:42.383 CC lib/ftl/base/ftl_base_bdev.o 00:05:42.383 CC lib/ftl/ftl_trace.o 00:05:42.641 CC lib/iscsi/tgt_node.o 00:05:42.641 CC lib/iscsi/iscsi_subsystem.o 00:05:42.641 CC lib/iscsi/iscsi_rpc.o 00:05:42.641 CC lib/iscsi/task.o 00:05:42.641 LIB libspdk_nvmf.a 00:05:42.641 LIB libspdk_ftl.a 00:05:42.641 SO libspdk_nvmf.so.19.0 00:05:42.899 SO libspdk_ftl.so.9.0 00:05:42.899 SYMLINK libspdk_nvmf.so 00:05:43.158 SYMLINK libspdk_ftl.so 00:05:43.158 LIB libspdk_iscsi.a 00:05:43.416 LIB libspdk_vhost.a 00:05:43.416 SO libspdk_iscsi.so.8.0 00:05:43.416 SO libspdk_vhost.so.8.0 00:05:43.416 SYMLINK libspdk_iscsi.so 00:05:43.416 SYMLINK libspdk_vhost.so 00:05:43.674 CC module/env_dpdk/env_dpdk_rpc.o 00:05:43.674 CC module/blob/bdev/blob_bdev.o 00:05:43.674 CC module/keyring/file/keyring.o 00:05:43.674 CC module/sock/posix/posix.o 00:05:43.674 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:43.674 CC module/keyring/linux/keyring.o 00:05:43.674 CC module/accel/error/accel_error.o 00:05:43.674 CC module/accel/dsa/accel_dsa.o 00:05:43.674 CC module/fsdev/aio/fsdev_aio.o 00:05:43.674 CC 
module/accel/ioat/accel_ioat.o 00:05:43.932 LIB libspdk_env_dpdk_rpc.a 00:05:43.932 SO libspdk_env_dpdk_rpc.so.6.0 00:05:43.932 SYMLINK libspdk_env_dpdk_rpc.so 00:05:43.932 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:43.932 CC module/keyring/linux/keyring_rpc.o 00:05:43.932 LIB libspdk_scheduler_dynamic.a 00:05:43.932 CC module/keyring/file/keyring_rpc.o 00:05:43.932 SO libspdk_scheduler_dynamic.so.4.0 00:05:43.932 CC module/accel/error/accel_error_rpc.o 00:05:43.932 CC module/accel/ioat/accel_ioat_rpc.o 00:05:43.932 SYMLINK libspdk_scheduler_dynamic.so 00:05:43.932 CC module/fsdev/aio/linux_aio_mgr.o 00:05:43.932 LIB libspdk_blob_bdev.a 00:05:43.932 LIB libspdk_keyring_linux.a 00:05:43.932 LIB libspdk_accel_error.a 00:05:43.932 SO libspdk_blob_bdev.so.11.0 00:05:43.932 CC module/accel/dsa/accel_dsa_rpc.o 00:05:43.932 LIB libspdk_keyring_file.a 00:05:43.932 SO libspdk_keyring_linux.so.1.0 00:05:43.932 SO libspdk_accel_error.so.2.0 00:05:44.189 SO libspdk_keyring_file.so.2.0 00:05:44.189 LIB libspdk_accel_ioat.a 00:05:44.189 SYMLINK libspdk_blob_bdev.so 00:05:44.189 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:44.189 SYMLINK libspdk_accel_error.so 00:05:44.189 SYMLINK libspdk_keyring_linux.so 00:05:44.189 SYMLINK libspdk_keyring_file.so 00:05:44.189 SO libspdk_accel_ioat.so.6.0 00:05:44.189 LIB libspdk_accel_dsa.a 00:05:44.189 SYMLINK libspdk_accel_ioat.so 00:05:44.189 SO libspdk_accel_dsa.so.5.0 00:05:44.189 CC module/accel/iaa/accel_iaa.o 00:05:44.189 LIB libspdk_scheduler_dpdk_governor.a 00:05:44.189 CC module/scheduler/gscheduler/gscheduler.o 00:05:44.189 SYMLINK libspdk_accel_dsa.so 00:05:44.189 CC module/accel/iaa/accel_iaa_rpc.o 00:05:44.189 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:44.189 CC module/bdev/delay/vbdev_delay.o 00:05:44.189 CC module/bdev/error/vbdev_error.o 00:05:44.189 CC module/blobfs/bdev/blobfs_bdev.o 00:05:44.447 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:44.447 CC module/bdev/gpt/gpt.o 00:05:44.447 CC module/bdev/error/vbdev_error_rpc.o 00:05:44.447 LIB libspdk_scheduler_gscheduler.a 00:05:44.447 SO libspdk_scheduler_gscheduler.so.4.0 00:05:44.447 LIB libspdk_accel_iaa.a 00:05:44.447 LIB libspdk_fsdev_aio.a 00:05:44.447 SO libspdk_accel_iaa.so.3.0 00:05:44.447 SYMLINK libspdk_scheduler_gscheduler.so 00:05:44.447 SO libspdk_fsdev_aio.so.1.0 00:05:44.447 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:44.447 SYMLINK libspdk_accel_iaa.so 00:05:44.447 CC module/bdev/gpt/vbdev_gpt.o 00:05:44.447 CC module/bdev/lvol/vbdev_lvol.o 00:05:44.447 LIB libspdk_sock_posix.a 00:05:44.447 LIB libspdk_bdev_error.a 00:05:44.447 SYMLINK libspdk_fsdev_aio.so 00:05:44.447 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:44.447 SO libspdk_sock_posix.so.6.0 00:05:44.447 SO libspdk_bdev_error.so.6.0 00:05:44.705 CC module/bdev/malloc/bdev_malloc.o 00:05:44.705 CC module/bdev/null/bdev_null.o 00:05:44.705 LIB libspdk_blobfs_bdev.a 00:05:44.705 SYMLINK libspdk_sock_posix.so 00:05:44.705 SYMLINK libspdk_bdev_error.so 00:05:44.705 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:44.705 CC module/bdev/nvme/bdev_nvme.o 00:05:44.705 SO libspdk_blobfs_bdev.so.6.0 00:05:44.705 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:44.705 SYMLINK libspdk_blobfs_bdev.so 00:05:44.705 CC module/bdev/null/bdev_null_rpc.o 00:05:44.705 CC module/bdev/passthru/vbdev_passthru.o 00:05:44.705 LIB libspdk_bdev_gpt.a 00:05:44.705 LIB libspdk_bdev_delay.a 00:05:44.705 SO libspdk_bdev_gpt.so.6.0 00:05:44.705 SO libspdk_bdev_delay.so.6.0 00:05:44.705 CC module/bdev/nvme/nvme_rpc.o 00:05:44.705 SYMLINK 
libspdk_bdev_gpt.so 00:05:44.705 CC module/bdev/nvme/bdev_mdns_client.o 00:05:44.705 SYMLINK libspdk_bdev_delay.so 00:05:44.705 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:44.963 LIB libspdk_bdev_null.a 00:05:44.963 SO libspdk_bdev_null.so.6.0 00:05:44.963 SYMLINK libspdk_bdev_null.so 00:05:44.963 CC module/bdev/raid/bdev_raid.o 00:05:44.963 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:44.963 LIB libspdk_bdev_lvol.a 00:05:44.963 LIB libspdk_bdev_malloc.a 00:05:44.963 CC module/bdev/split/vbdev_split.o 00:05:44.963 CC module/bdev/raid/bdev_raid_rpc.o 00:05:44.963 SO libspdk_bdev_malloc.so.6.0 00:05:44.963 SO libspdk_bdev_lvol.so.6.0 00:05:44.963 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:44.963 SYMLINK libspdk_bdev_malloc.so 00:05:44.963 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:44.963 SYMLINK libspdk_bdev_lvol.so 00:05:44.963 CC module/bdev/split/vbdev_split_rpc.o 00:05:45.221 CC module/bdev/xnvme/bdev_xnvme.o 00:05:45.221 LIB libspdk_bdev_passthru.a 00:05:45.221 SO libspdk_bdev_passthru.so.6.0 00:05:45.221 SYMLINK libspdk_bdev_passthru.so 00:05:45.221 CC module/bdev/raid/bdev_raid_sb.o 00:05:45.221 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:45.221 LIB libspdk_bdev_split.a 00:05:45.221 SO libspdk_bdev_split.so.6.0 00:05:45.221 CC module/bdev/nvme/vbdev_opal.o 00:05:45.221 CC module/bdev/raid/raid0.o 00:05:45.221 CC module/bdev/aio/bdev_aio.o 00:05:45.221 SYMLINK libspdk_bdev_split.so 00:05:45.221 LIB libspdk_bdev_zone_block.a 00:05:45.221 CC module/bdev/ftl/bdev_ftl.o 00:05:45.221 SO libspdk_bdev_zone_block.so.6.0 00:05:45.480 LIB libspdk_bdev_xnvme.a 00:05:45.480 SYMLINK libspdk_bdev_zone_block.so 00:05:45.480 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:45.480 SO libspdk_bdev_xnvme.so.3.0 00:05:45.480 CC module/bdev/iscsi/bdev_iscsi.o 00:05:45.480 SYMLINK libspdk_bdev_xnvme.so 00:05:45.480 CC module/bdev/raid/raid1.o 00:05:45.480 CC module/bdev/raid/concat.o 00:05:45.480 CC module/bdev/aio/bdev_aio_rpc.o 00:05:45.480 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:45.480 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:45.480 LIB libspdk_bdev_ftl.a 00:05:45.738 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:45.738 SO libspdk_bdev_ftl.so.6.0 00:05:45.738 LIB libspdk_bdev_aio.a 00:05:45.738 SO libspdk_bdev_aio.so.6.0 00:05:45.738 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:45.738 SYMLINK libspdk_bdev_ftl.so 00:05:45.738 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:45.738 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:45.738 SYMLINK libspdk_bdev_aio.so 00:05:45.739 LIB libspdk_bdev_raid.a 00:05:45.739 LIB libspdk_bdev_iscsi.a 00:05:45.739 SO libspdk_bdev_raid.so.6.0 00:05:45.739 SO libspdk_bdev_iscsi.so.6.0 00:05:45.997 SYMLINK libspdk_bdev_raid.so 00:05:45.997 SYMLINK libspdk_bdev_iscsi.so 00:05:45.997 LIB libspdk_bdev_virtio.a 00:05:45.997 SO libspdk_bdev_virtio.so.6.0 00:05:46.254 SYMLINK libspdk_bdev_virtio.so 00:05:46.819 LIB libspdk_bdev_nvme.a 00:05:46.819 SO libspdk_bdev_nvme.so.7.0 00:05:47.077 SYMLINK libspdk_bdev_nvme.so 00:05:47.334 CC module/event/subsystems/iobuf/iobuf.o 00:05:47.334 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:47.334 CC module/event/subsystems/keyring/keyring.o 00:05:47.334 CC module/event/subsystems/fsdev/fsdev.o 00:05:47.334 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:47.334 CC module/event/subsystems/sock/sock.o 00:05:47.334 CC module/event/subsystems/scheduler/scheduler.o 00:05:47.335 CC module/event/subsystems/vmd/vmd.o 00:05:47.335 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:47.592 LIB libspdk_event_scheduler.a 
00:05:47.592 LIB libspdk_event_keyring.a 00:05:47.592 LIB libspdk_event_vmd.a 00:05:47.592 LIB libspdk_event_vhost_blk.a 00:05:47.592 LIB libspdk_event_fsdev.a 00:05:47.592 LIB libspdk_event_sock.a 00:05:47.592 SO libspdk_event_keyring.so.1.0 00:05:47.592 SO libspdk_event_scheduler.so.4.0 00:05:47.592 LIB libspdk_event_iobuf.a 00:05:47.592 SO libspdk_event_vhost_blk.so.3.0 00:05:47.592 SO libspdk_event_vmd.so.6.0 00:05:47.592 SO libspdk_event_sock.so.5.0 00:05:47.592 SO libspdk_event_fsdev.so.1.0 00:05:47.592 SO libspdk_event_iobuf.so.3.0 00:05:47.592 SYMLINK libspdk_event_keyring.so 00:05:47.592 SYMLINK libspdk_event_scheduler.so 00:05:47.592 SYMLINK libspdk_event_vhost_blk.so 00:05:47.592 SYMLINK libspdk_event_sock.so 00:05:47.592 SYMLINK libspdk_event_fsdev.so 00:05:47.592 SYMLINK libspdk_event_vmd.so 00:05:47.592 SYMLINK libspdk_event_iobuf.so 00:05:47.850 CC module/event/subsystems/accel/accel.o 00:05:48.108 LIB libspdk_event_accel.a 00:05:48.108 SO libspdk_event_accel.so.6.0 00:05:48.108 SYMLINK libspdk_event_accel.so 00:05:48.366 CC module/event/subsystems/bdev/bdev.o 00:05:48.366 LIB libspdk_event_bdev.a 00:05:48.366 SO libspdk_event_bdev.so.6.0 00:05:48.624 SYMLINK libspdk_event_bdev.so 00:05:48.624 CC module/event/subsystems/nbd/nbd.o 00:05:48.624 CC module/event/subsystems/scsi/scsi.o 00:05:48.624 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:48.624 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:48.624 CC module/event/subsystems/ublk/ublk.o 00:05:48.933 LIB libspdk_event_nbd.a 00:05:48.933 LIB libspdk_event_scsi.a 00:05:48.933 LIB libspdk_event_ublk.a 00:05:48.933 SO libspdk_event_nbd.so.6.0 00:05:48.933 SO libspdk_event_ublk.so.3.0 00:05:48.933 SO libspdk_event_scsi.so.6.0 00:05:48.933 SYMLINK libspdk_event_nbd.so 00:05:48.933 SYMLINK libspdk_event_scsi.so 00:05:48.933 SYMLINK libspdk_event_ublk.so 00:05:48.933 LIB libspdk_event_nvmf.a 00:05:48.933 SO libspdk_event_nvmf.so.6.0 00:05:48.933 SYMLINK libspdk_event_nvmf.so 00:05:49.238 CC module/event/subsystems/iscsi/iscsi.o 00:05:49.238 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:49.238 LIB libspdk_event_vhost_scsi.a 00:05:49.238 LIB libspdk_event_iscsi.a 00:05:49.238 SO libspdk_event_vhost_scsi.so.3.0 00:05:49.238 SO libspdk_event_iscsi.so.6.0 00:05:49.238 SYMLINK libspdk_event_vhost_scsi.so 00:05:49.238 SYMLINK libspdk_event_iscsi.so 00:05:49.496 SO libspdk.so.6.0 00:05:49.496 SYMLINK libspdk.so 00:05:49.496 CC test/rpc_client/rpc_client_test.o 00:05:49.496 CXX app/trace/trace.o 00:05:49.496 TEST_HEADER include/spdk/accel.h 00:05:49.496 TEST_HEADER include/spdk/accel_module.h 00:05:49.496 TEST_HEADER include/spdk/assert.h 00:05:49.496 TEST_HEADER include/spdk/barrier.h 00:05:49.496 TEST_HEADER include/spdk/base64.h 00:05:49.496 TEST_HEADER include/spdk/bdev.h 00:05:49.496 TEST_HEADER include/spdk/bdev_module.h 00:05:49.496 TEST_HEADER include/spdk/bdev_zone.h 00:05:49.496 TEST_HEADER include/spdk/bit_array.h 00:05:49.496 TEST_HEADER include/spdk/bit_pool.h 00:05:49.496 TEST_HEADER include/spdk/blob_bdev.h 00:05:49.496 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:49.496 TEST_HEADER include/spdk/blobfs.h 00:05:49.496 TEST_HEADER include/spdk/blob.h 00:05:49.496 TEST_HEADER include/spdk/conf.h 00:05:49.496 TEST_HEADER include/spdk/config.h 00:05:49.496 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:49.496 TEST_HEADER include/spdk/cpuset.h 00:05:49.496 TEST_HEADER include/spdk/crc16.h 00:05:49.496 TEST_HEADER include/spdk/crc32.h 00:05:49.496 TEST_HEADER include/spdk/crc64.h 00:05:49.496 TEST_HEADER 
include/spdk/dif.h 00:05:49.496 TEST_HEADER include/spdk/dma.h 00:05:49.496 TEST_HEADER include/spdk/endian.h 00:05:49.496 TEST_HEADER include/spdk/env_dpdk.h 00:05:49.496 TEST_HEADER include/spdk/env.h 00:05:49.754 TEST_HEADER include/spdk/event.h 00:05:49.754 TEST_HEADER include/spdk/fd_group.h 00:05:49.754 TEST_HEADER include/spdk/fd.h 00:05:49.754 TEST_HEADER include/spdk/file.h 00:05:49.754 TEST_HEADER include/spdk/fsdev.h 00:05:49.754 TEST_HEADER include/spdk/fsdev_module.h 00:05:49.754 TEST_HEADER include/spdk/ftl.h 00:05:49.754 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:49.754 TEST_HEADER include/spdk/gpt_spec.h 00:05:49.754 TEST_HEADER include/spdk/hexlify.h 00:05:49.754 TEST_HEADER include/spdk/histogram_data.h 00:05:49.754 TEST_HEADER include/spdk/idxd.h 00:05:49.754 TEST_HEADER include/spdk/idxd_spec.h 00:05:49.754 CC examples/util/zipf/zipf.o 00:05:49.754 TEST_HEADER include/spdk/init.h 00:05:49.754 TEST_HEADER include/spdk/ioat.h 00:05:49.754 TEST_HEADER include/spdk/ioat_spec.h 00:05:49.754 CC test/thread/poller_perf/poller_perf.o 00:05:49.754 TEST_HEADER include/spdk/iscsi_spec.h 00:05:49.754 TEST_HEADER include/spdk/json.h 00:05:49.754 CC examples/ioat/perf/perf.o 00:05:49.754 TEST_HEADER include/spdk/jsonrpc.h 00:05:49.754 TEST_HEADER include/spdk/keyring.h 00:05:49.754 TEST_HEADER include/spdk/keyring_module.h 00:05:49.754 TEST_HEADER include/spdk/likely.h 00:05:49.754 TEST_HEADER include/spdk/log.h 00:05:49.754 TEST_HEADER include/spdk/lvol.h 00:05:49.754 TEST_HEADER include/spdk/md5.h 00:05:49.754 TEST_HEADER include/spdk/memory.h 00:05:49.754 TEST_HEADER include/spdk/mmio.h 00:05:49.754 TEST_HEADER include/spdk/nbd.h 00:05:49.754 TEST_HEADER include/spdk/net.h 00:05:49.754 TEST_HEADER include/spdk/notify.h 00:05:49.754 TEST_HEADER include/spdk/nvme.h 00:05:49.754 TEST_HEADER include/spdk/nvme_intel.h 00:05:49.754 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:49.754 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:49.754 TEST_HEADER include/spdk/nvme_spec.h 00:05:49.754 TEST_HEADER include/spdk/nvme_zns.h 00:05:49.754 CC test/dma/test_dma/test_dma.o 00:05:49.754 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:49.754 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:49.754 TEST_HEADER include/spdk/nvmf.h 00:05:49.754 TEST_HEADER include/spdk/nvmf_spec.h 00:05:49.754 TEST_HEADER include/spdk/nvmf_transport.h 00:05:49.754 TEST_HEADER include/spdk/opal.h 00:05:49.754 TEST_HEADER include/spdk/opal_spec.h 00:05:49.754 TEST_HEADER include/spdk/pci_ids.h 00:05:49.754 TEST_HEADER include/spdk/pipe.h 00:05:49.754 TEST_HEADER include/spdk/queue.h 00:05:49.754 CC test/app/bdev_svc/bdev_svc.o 00:05:49.754 TEST_HEADER include/spdk/reduce.h 00:05:49.754 TEST_HEADER include/spdk/rpc.h 00:05:49.754 TEST_HEADER include/spdk/scheduler.h 00:05:49.754 TEST_HEADER include/spdk/scsi.h 00:05:49.754 CC test/env/mem_callbacks/mem_callbacks.o 00:05:49.754 TEST_HEADER include/spdk/scsi_spec.h 00:05:49.754 TEST_HEADER include/spdk/sock.h 00:05:49.754 TEST_HEADER include/spdk/stdinc.h 00:05:49.754 TEST_HEADER include/spdk/string.h 00:05:49.754 TEST_HEADER include/spdk/thread.h 00:05:49.754 TEST_HEADER include/spdk/trace.h 00:05:49.754 TEST_HEADER include/spdk/trace_parser.h 00:05:49.754 TEST_HEADER include/spdk/tree.h 00:05:49.754 TEST_HEADER include/spdk/ublk.h 00:05:49.754 TEST_HEADER include/spdk/util.h 00:05:49.754 TEST_HEADER include/spdk/uuid.h 00:05:49.754 TEST_HEADER include/spdk/version.h 00:05:49.754 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:49.754 TEST_HEADER include/spdk/vfio_user_spec.h 
00:05:49.754 TEST_HEADER include/spdk/vhost.h 00:05:49.754 TEST_HEADER include/spdk/vmd.h 00:05:49.754 TEST_HEADER include/spdk/xor.h 00:05:49.754 TEST_HEADER include/spdk/zipf.h 00:05:49.754 CXX test/cpp_headers/accel.o 00:05:49.754 LINK rpc_client_test 00:05:49.754 LINK poller_perf 00:05:49.754 LINK interrupt_tgt 00:05:49.754 LINK zipf 00:05:50.014 LINK bdev_svc 00:05:50.014 LINK ioat_perf 00:05:50.014 CXX test/cpp_headers/accel_module.o 00:05:50.014 LINK spdk_trace 00:05:50.014 CC test/app/jsoncat/jsoncat.o 00:05:50.014 CC test/app/histogram_perf/histogram_perf.o 00:05:50.014 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:50.014 CXX test/cpp_headers/assert.o 00:05:50.014 CC examples/ioat/verify/verify.o 00:05:50.014 CXX test/cpp_headers/barrier.o 00:05:50.014 CC test/event/event_perf/event_perf.o 00:05:50.272 LINK histogram_perf 00:05:50.272 LINK jsoncat 00:05:50.273 CC app/trace_record/trace_record.o 00:05:50.273 LINK test_dma 00:05:50.273 LINK mem_callbacks 00:05:50.273 CXX test/cpp_headers/base64.o 00:05:50.273 CC test/app/stub/stub.o 00:05:50.273 CXX test/cpp_headers/bdev.o 00:05:50.273 LINK event_perf 00:05:50.273 CXX test/cpp_headers/bdev_module.o 00:05:50.273 LINK verify 00:05:50.273 CXX test/cpp_headers/bdev_zone.o 00:05:50.273 CC test/env/vtophys/vtophys.o 00:05:50.531 LINK spdk_trace_record 00:05:50.531 LINK stub 00:05:50.531 CC test/event/reactor/reactor.o 00:05:50.531 LINK nvme_fuzz 00:05:50.531 LINK vtophys 00:05:50.531 CXX test/cpp_headers/bit_array.o 00:05:50.531 CC test/accel/dif/dif.o 00:05:50.531 CC examples/sock/hello_world/hello_sock.o 00:05:50.531 CC examples/thread/thread/thread_ex.o 00:05:50.531 CC test/blobfs/mkfs/mkfs.o 00:05:50.531 LINK reactor 00:05:50.531 CC app/iscsi_tgt/iscsi_tgt.o 00:05:50.790 CXX test/cpp_headers/bit_pool.o 00:05:50.790 CC app/nvmf_tgt/nvmf_main.o 00:05:50.790 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:50.790 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:50.790 CC test/event/reactor_perf/reactor_perf.o 00:05:50.790 LINK mkfs 00:05:50.790 CXX test/cpp_headers/blob_bdev.o 00:05:50.790 LINK nvmf_tgt 00:05:50.790 LINK thread 00:05:50.790 LINK iscsi_tgt 00:05:50.790 LINK hello_sock 00:05:50.790 LINK env_dpdk_post_init 00:05:50.790 LINK reactor_perf 00:05:51.048 CXX test/cpp_headers/blobfs_bdev.o 00:05:51.048 CXX test/cpp_headers/blobfs.o 00:05:51.048 CXX test/cpp_headers/blob.o 00:05:51.048 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:51.048 CC test/env/memory/memory_ut.o 00:05:51.048 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:51.048 CC test/event/app_repeat/app_repeat.o 00:05:51.048 CC examples/vmd/lsvmd/lsvmd.o 00:05:51.048 CXX test/cpp_headers/conf.o 00:05:51.048 CC app/spdk_tgt/spdk_tgt.o 00:05:51.048 CC app/spdk_lspci/spdk_lspci.o 00:05:51.306 CC examples/vmd/led/led.o 00:05:51.306 LINK lsvmd 00:05:51.306 CXX test/cpp_headers/config.o 00:05:51.306 LINK app_repeat 00:05:51.306 CXX test/cpp_headers/cpuset.o 00:05:51.306 LINK dif 00:05:51.306 LINK spdk_lspci 00:05:51.306 LINK led 00:05:51.306 LINK spdk_tgt 00:05:51.306 CC test/env/pci/pci_ut.o 00:05:51.306 CXX test/cpp_headers/crc16.o 00:05:51.306 CXX test/cpp_headers/crc32.o 00:05:51.565 CXX test/cpp_headers/crc64.o 00:05:51.565 CC test/event/scheduler/scheduler.o 00:05:51.565 LINK vhost_fuzz 00:05:51.565 CXX test/cpp_headers/dif.o 00:05:51.565 CC app/spdk_nvme_perf/perf.o 00:05:51.565 CXX test/cpp_headers/dma.o 00:05:51.565 CXX test/cpp_headers/endian.o 00:05:51.565 CC examples/idxd/perf/perf.o 00:05:51.565 CXX test/cpp_headers/env_dpdk.o 00:05:51.565 LINK scheduler 
00:05:51.565 LINK pci_ut 00:05:51.823 CXX test/cpp_headers/env.o 00:05:51.823 CC app/spdk_nvme_identify/identify.o 00:05:51.823 CC app/spdk_nvme_discover/discovery_aer.o 00:05:51.823 CC app/spdk_top/spdk_top.o 00:05:51.823 CXX test/cpp_headers/event.o 00:05:51.823 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:51.823 LINK idxd_perf 00:05:52.082 CXX test/cpp_headers/fd_group.o 00:05:52.082 LINK spdk_nvme_discover 00:05:52.082 CC examples/accel/perf/accel_perf.o 00:05:52.082 CXX test/cpp_headers/fd.o 00:05:52.082 LINK hello_fsdev 00:05:52.082 LINK memory_ut 00:05:52.082 CC app/vhost/vhost.o 00:05:52.082 CXX test/cpp_headers/file.o 00:05:52.341 CC test/lvol/esnap/esnap.o 00:05:52.341 LINK vhost 00:05:52.341 LINK iscsi_fuzz 00:05:52.341 CXX test/cpp_headers/fsdev.o 00:05:52.341 LINK spdk_nvme_perf 00:05:52.341 LINK spdk_nvme_identify 00:05:52.341 CC examples/blob/hello_world/hello_blob.o 00:05:52.341 CC test/nvme/aer/aer.o 00:05:52.598 LINK accel_perf 00:05:52.598 CXX test/cpp_headers/fsdev_module.o 00:05:52.598 LINK spdk_top 00:05:52.598 CC test/nvme/reset/reset.o 00:05:52.598 CXX test/cpp_headers/ftl.o 00:05:52.598 CC app/spdk_dd/spdk_dd.o 00:05:52.598 CC test/nvme/sgl/sgl.o 00:05:52.598 LINK hello_blob 00:05:52.598 LINK aer 00:05:52.598 CC examples/blob/cli/blobcli.o 00:05:52.598 CC test/nvme/e2edp/nvme_dp.o 00:05:52.856 CXX test/cpp_headers/fuse_dispatcher.o 00:05:52.856 LINK reset 00:05:52.856 CC examples/nvme/hello_world/hello_world.o 00:05:52.856 CXX test/cpp_headers/gpt_spec.o 00:05:52.856 LINK sgl 00:05:52.856 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:52.856 CC examples/nvme/reconnect/reconnect.o 00:05:52.856 LINK nvme_dp 00:05:52.856 LINK spdk_dd 00:05:52.856 CC test/nvme/overhead/overhead.o 00:05:52.856 CXX test/cpp_headers/hexlify.o 00:05:53.114 LINK hello_world 00:05:53.114 CC examples/nvme/arbitration/arbitration.o 00:05:53.114 CC examples/nvme/hotplug/hotplug.o 00:05:53.114 CXX test/cpp_headers/histogram_data.o 00:05:53.114 LINK reconnect 00:05:53.114 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:53.114 LINK blobcli 00:05:53.114 CXX test/cpp_headers/idxd.o 00:05:53.114 LINK overhead 00:05:53.114 CC app/fio/nvme/fio_plugin.o 00:05:53.372 LINK hotplug 00:05:53.372 LINK arbitration 00:05:53.372 LINK nvme_manage 00:05:53.372 CXX test/cpp_headers/idxd_spec.o 00:05:53.372 LINK cmb_copy 00:05:53.372 CC test/nvme/err_injection/err_injection.o 00:05:53.372 CXX test/cpp_headers/init.o 00:05:53.372 CC examples/bdev/hello_world/hello_bdev.o 00:05:53.372 CC examples/nvme/abort/abort.o 00:05:53.372 CC test/nvme/startup/startup.o 00:05:53.630 CC test/nvme/reserve/reserve.o 00:05:53.630 CC test/bdev/bdevio/bdevio.o 00:05:53.630 CC test/nvme/simple_copy/simple_copy.o 00:05:53.630 LINK err_injection 00:05:53.630 CXX test/cpp_headers/ioat.o 00:05:53.630 LINK startup 00:05:53.630 LINK hello_bdev 00:05:53.630 CXX test/cpp_headers/ioat_spec.o 00:05:53.630 LINK reserve 00:05:53.630 LINK abort 00:05:53.630 LINK simple_copy 00:05:53.887 CC test/nvme/connect_stress/connect_stress.o 00:05:53.887 CC test/nvme/boot_partition/boot_partition.o 00:05:53.887 LINK spdk_nvme 00:05:53.887 CXX test/cpp_headers/iscsi_spec.o 00:05:53.887 CC examples/bdev/bdevperf/bdevperf.o 00:05:53.887 CXX test/cpp_headers/json.o 00:05:53.887 LINK bdevio 00:05:53.887 LINK connect_stress 00:05:53.887 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:53.887 LINK boot_partition 00:05:53.887 CC test/nvme/compliance/nvme_compliance.o 00:05:53.887 CC app/fio/bdev/fio_plugin.o 00:05:53.887 CC 
test/nvme/fused_ordering/fused_ordering.o 00:05:53.887 CXX test/cpp_headers/jsonrpc.o 00:05:54.146 LINK pmr_persistence 00:05:54.146 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:54.146 CC test/nvme/fdp/fdp.o 00:05:54.146 CXX test/cpp_headers/keyring.o 00:05:54.146 CC test/nvme/cuse/cuse.o 00:05:54.146 LINK fused_ordering 00:05:54.146 CXX test/cpp_headers/keyring_module.o 00:05:54.146 CXX test/cpp_headers/likely.o 00:05:54.146 LINK doorbell_aers 00:05:54.412 LINK nvme_compliance 00:05:54.412 CXX test/cpp_headers/log.o 00:05:54.412 CXX test/cpp_headers/lvol.o 00:05:54.412 CXX test/cpp_headers/md5.o 00:05:54.412 CXX test/cpp_headers/memory.o 00:05:54.412 CXX test/cpp_headers/mmio.o 00:05:54.412 LINK fdp 00:05:54.412 CXX test/cpp_headers/nbd.o 00:05:54.412 LINK spdk_bdev 00:05:54.412 CXX test/cpp_headers/net.o 00:05:54.412 CXX test/cpp_headers/notify.o 00:05:54.412 CXX test/cpp_headers/nvme.o 00:05:54.412 CXX test/cpp_headers/nvme_intel.o 00:05:54.412 CXX test/cpp_headers/nvme_ocssd.o 00:05:54.678 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:54.678 CXX test/cpp_headers/nvme_spec.o 00:05:54.678 CXX test/cpp_headers/nvme_zns.o 00:05:54.678 CXX test/cpp_headers/nvmf_cmd.o 00:05:54.678 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:54.678 CXX test/cpp_headers/nvmf.o 00:05:54.678 CXX test/cpp_headers/nvmf_spec.o 00:05:54.678 LINK bdevperf 00:05:54.678 CXX test/cpp_headers/nvmf_transport.o 00:05:54.678 CXX test/cpp_headers/opal.o 00:05:54.678 CXX test/cpp_headers/opal_spec.o 00:05:54.678 CXX test/cpp_headers/pci_ids.o 00:05:54.678 CXX test/cpp_headers/pipe.o 00:05:54.678 CXX test/cpp_headers/queue.o 00:05:54.678 CXX test/cpp_headers/reduce.o 00:05:54.678 CXX test/cpp_headers/rpc.o 00:05:54.936 CXX test/cpp_headers/scheduler.o 00:05:54.936 CXX test/cpp_headers/scsi.o 00:05:54.936 CXX test/cpp_headers/scsi_spec.o 00:05:54.936 CXX test/cpp_headers/sock.o 00:05:54.936 CXX test/cpp_headers/stdinc.o 00:05:54.936 CXX test/cpp_headers/string.o 00:05:54.936 CXX test/cpp_headers/thread.o 00:05:54.936 CXX test/cpp_headers/trace.o 00:05:54.936 CC examples/nvmf/nvmf/nvmf.o 00:05:54.936 CXX test/cpp_headers/trace_parser.o 00:05:54.936 CXX test/cpp_headers/tree.o 00:05:54.936 CXX test/cpp_headers/ublk.o 00:05:54.936 CXX test/cpp_headers/util.o 00:05:54.936 CXX test/cpp_headers/uuid.o 00:05:54.936 CXX test/cpp_headers/version.o 00:05:54.936 CXX test/cpp_headers/vfio_user_pci.o 00:05:54.936 CXX test/cpp_headers/vfio_user_spec.o 00:05:55.194 CXX test/cpp_headers/vhost.o 00:05:55.194 CXX test/cpp_headers/vmd.o 00:05:55.194 CXX test/cpp_headers/xor.o 00:05:55.194 CXX test/cpp_headers/zipf.o 00:05:55.194 LINK nvmf 00:05:55.452 LINK cuse 00:05:56.822 LINK esnap 00:05:57.080 00:05:57.080 real 1m0.569s 00:05:57.080 user 5m12.733s 00:05:57.080 sys 0m53.740s 00:05:57.080 11:01:25 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:57.080 11:01:25 make -- common/autotest_common.sh@10 -- $ set +x 00:05:57.080 ************************************ 00:05:57.080 END TEST make 00:05:57.080 ************************************ 00:05:57.080 11:01:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:57.080 11:01:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:57.080 11:01:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:57.080 11:01:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:57.080 11:01:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:57.080 11:01:25 -- pm/common@44 -- $ pid=5800 00:05:57.080 11:01:25 -- 
pm/common@50 -- $ kill -TERM 5800 00:05:57.080 11:01:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:57.080 11:01:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:57.080 11:01:25 -- pm/common@44 -- $ pid=5802 00:05:57.080 11:01:25 -- pm/common@50 -- $ kill -TERM 5802 00:05:57.338 11:01:25 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:57.338 11:01:25 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:57.338 11:01:25 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:57.338 11:01:26 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:57.338 11:01:26 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.338 11:01:26 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.338 11:01:26 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.338 11:01:26 -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.338 11:01:26 -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.338 11:01:26 -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.338 11:01:26 -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.338 11:01:26 -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.338 11:01:26 -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.338 11:01:26 -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.338 11:01:26 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.338 11:01:26 -- scripts/common.sh@344 -- # case "$op" in 00:05:57.338 11:01:26 -- scripts/common.sh@345 -- # : 1 00:05:57.338 11:01:26 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.338 11:01:26 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.338 11:01:26 -- scripts/common.sh@365 -- # decimal 1 00:05:57.338 11:01:26 -- scripts/common.sh@353 -- # local d=1 00:05:57.338 11:01:26 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.338 11:01:26 -- scripts/common.sh@355 -- # echo 1 00:05:57.338 11:01:26 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.338 11:01:26 -- scripts/common.sh@366 -- # decimal 2 00:05:57.338 11:01:26 -- scripts/common.sh@353 -- # local d=2 00:05:57.338 11:01:26 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.338 11:01:26 -- scripts/common.sh@355 -- # echo 2 00:05:57.338 11:01:26 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.338 11:01:26 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.338 11:01:26 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.338 11:01:26 -- scripts/common.sh@368 -- # return 0 00:05:57.338 11:01:26 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.338 11:01:26 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.338 --rc genhtml_branch_coverage=1 00:05:57.338 --rc genhtml_function_coverage=1 00:05:57.338 --rc genhtml_legend=1 00:05:57.338 --rc geninfo_all_blocks=1 00:05:57.338 --rc geninfo_unexecuted_blocks=1 00:05:57.338 00:05:57.339 ' 00:05:57.339 11:01:26 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:57.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.339 --rc genhtml_branch_coverage=1 00:05:57.339 --rc genhtml_function_coverage=1 00:05:57.339 --rc genhtml_legend=1 00:05:57.339 --rc geninfo_all_blocks=1 00:05:57.339 --rc geninfo_unexecuted_blocks=1 00:05:57.339 00:05:57.339 ' 00:05:57.339 11:01:26 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:57.339 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:57.339 --rc genhtml_branch_coverage=1 00:05:57.339 --rc genhtml_function_coverage=1 00:05:57.339 --rc genhtml_legend=1 00:05:57.339 --rc geninfo_all_blocks=1 00:05:57.339 --rc geninfo_unexecuted_blocks=1 00:05:57.339 00:05:57.339 ' 00:05:57.339 11:01:26 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:57.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.339 --rc genhtml_branch_coverage=1 00:05:57.339 --rc genhtml_function_coverage=1 00:05:57.339 --rc genhtml_legend=1 00:05:57.339 --rc geninfo_all_blocks=1 00:05:57.339 --rc geninfo_unexecuted_blocks=1 00:05:57.339 00:05:57.339 ' 00:05:57.339 11:01:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:57.339 11:01:26 -- nvmf/common.sh@7 -- # uname -s 00:05:57.339 11:01:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.339 11:01:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.339 11:01:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.339 11:01:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.339 11:01:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.339 11:01:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.339 11:01:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.339 11:01:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.339 11:01:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.339 11:01:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.339 11:01:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b2b8603c-1fc7-4e5b-8078-2e6f24a83076 00:05:57.339 11:01:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=b2b8603c-1fc7-4e5b-8078-2e6f24a83076 00:05:57.339 11:01:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.339 11:01:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.339 11:01:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.339 11:01:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.339 11:01:26 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:57.339 11:01:26 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.339 11:01:26 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.339 11:01:26 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.339 11:01:26 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.339 11:01:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.339 11:01:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.339 11:01:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.339 11:01:26 -- paths/export.sh@5 -- # export PATH 00:05:57.339 11:01:26 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.339 11:01:26 -- nvmf/common.sh@51 -- # : 0 00:05:57.339 11:01:26 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.339 11:01:26 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.339 11:01:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.339 11:01:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.339 11:01:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.339 11:01:26 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.339 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.339 11:01:26 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.339 11:01:26 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.339 11:01:26 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.339 11:01:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:57.339 11:01:26 -- spdk/autotest.sh@32 -- # uname -s 00:05:57.339 11:01:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:57.339 11:01:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:57.339 11:01:26 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:57.339 11:01:26 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:57.339 11:01:26 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:57.339 11:01:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:57.339 11:01:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:57.339 11:01:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:57.339 11:01:26 -- spdk/autotest.sh@48 -- # udevadm_pid=66963 00:05:57.339 11:01:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:57.339 11:01:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:57.339 11:01:26 -- pm/common@17 -- # local monitor 00:05:57.339 11:01:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:57.339 11:01:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:57.339 11:01:26 -- pm/common@25 -- # sleep 1 00:05:57.339 11:01:26 -- pm/common@21 -- # date +%s 00:05:57.339 11:01:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732705286 00:05:57.339 11:01:26 -- pm/common@21 -- # date +%s 00:05:57.339 11:01:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732705286 00:05:57.339 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732705286_collect-vmstat.pm.log 00:05:57.339 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732705286_collect-cpu-load.pm.log 00:05:58.273 11:01:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:58.273 11:01:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:58.273 11:01:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:58.273 11:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:58.531 11:01:27 -- spdk/autotest.sh@59 
-- # create_test_list 00:05:58.531 11:01:27 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:58.531 11:01:27 -- common/autotest_common.sh@10 -- # set +x 00:05:58.531 11:01:27 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:58.531 11:01:27 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:58.531 11:01:27 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:58.531 11:01:27 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:58.531 11:01:27 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:58.531 11:01:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:58.531 11:01:27 -- common/autotest_common.sh@1455 -- # uname 00:05:58.531 11:01:27 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:58.531 11:01:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:58.531 11:01:27 -- common/autotest_common.sh@1475 -- # uname 00:05:58.531 11:01:27 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:58.531 11:01:27 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:58.531 11:01:27 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:58.531 lcov: LCOV version 1.15 00:05:58.531 11:01:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:13.415 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:13.415 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:28.383 11:01:56 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:28.383 11:01:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.383 11:01:56 -- common/autotest_common.sh@10 -- # set +x 00:06:28.383 11:01:56 -- spdk/autotest.sh@78 -- # rm -f 00:06:28.383 11:01:56 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:28.644 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:29.217 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:29.217 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:29.217 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:06:29.217 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:06:29.217 11:01:58 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:29.217 11:01:58 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:29.217 11:01:58 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:29.217 11:01:58 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:29.217 11:01:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.217 11:01:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:29.217 11:01:58 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:29.217 11:01:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:29.217 11:01:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.217 11:01:58 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.217 11:01:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:06:29.217 11:01:58 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:06:29.217 11:01:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:06:29.217 11:01:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.217 11:01:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.217 11:01:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:06:29.217 11:01:58 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:06:29.217 11:01:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:06:29.217 11:01:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.217 11:01:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.217 11:01:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:29.217 11:01:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:29.217 11:01:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:29.217 11:01:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.217 11:01:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.217 11:01:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2c2n1 00:06:29.217 11:01:58 -- common/autotest_common.sh@1648 -- # local device=nvme2c2n1 00:06:29.217 11:01:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:06:29.217 11:01:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.217 11:01:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.217 11:01:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:29.217 11:01:58 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:29.217 11:01:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:29.217 11:01:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.217 11:01:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:29.217 11:01:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:29.217 11:01:58 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:29.217 11:01:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:29.217 11:01:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:29.217 11:01:58 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:29.217 11:01:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:29.217 11:01:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:29.217 11:01:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:29.217 11:01:58 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:29.217 11:01:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:29.217 No valid GPT data, bailing 00:06:29.218 11:01:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:29.218 11:01:58 -- scripts/common.sh@394 -- # pt= 00:06:29.479 11:01:58 -- scripts/common.sh@395 -- # return 1 00:06:29.479 11:01:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:29.479 1+0 records in 00:06:29.479 1+0 records out 00:06:29.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00644089 s, 163 MB/s 
00:06:29.479 11:01:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:29.479 11:01:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:29.479 11:01:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:06:29.479 11:01:58 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:06:29.479 11:01:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:06:29.479 No valid GPT data, bailing 00:06:29.479 11:01:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:06:29.479 11:01:58 -- scripts/common.sh@394 -- # pt= 00:06:29.479 11:01:58 -- scripts/common.sh@395 -- # return 1 00:06:29.479 11:01:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:06:29.479 1+0 records in 00:06:29.479 1+0 records out 00:06:29.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00642238 s, 163 MB/s 00:06:29.479 11:01:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:29.479 11:01:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:29.479 11:01:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:06:29.479 11:01:58 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:06:29.479 11:01:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:06:29.479 No valid GPT data, bailing 00:06:29.479 11:01:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:06:29.479 11:01:58 -- scripts/common.sh@394 -- # pt= 00:06:29.479 11:01:58 -- scripts/common.sh@395 -- # return 1 00:06:29.479 11:01:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:06:29.479 1+0 records in 00:06:29.479 1+0 records out 00:06:29.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00585209 s, 179 MB/s 00:06:29.479 11:01:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:29.479 11:01:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:29.479 11:01:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:29.479 11:01:58 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:29.479 11:01:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:29.479 No valid GPT data, bailing 00:06:29.479 11:01:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:29.741 11:01:58 -- scripts/common.sh@394 -- # pt= 00:06:29.741 11:01:58 -- scripts/common.sh@395 -- # return 1 00:06:29.741 11:01:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:29.741 1+0 records in 00:06:29.741 1+0 records out 00:06:29.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309411 s, 33.9 MB/s 00:06:29.741 11:01:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:29.741 11:01:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:29.741 11:01:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:06:29.741 11:01:58 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:06:29.741 11:01:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:06:29.741 No valid GPT data, bailing 00:06:29.741 11:01:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:29.741 11:01:58 -- scripts/common.sh@394 -- # pt= 00:06:29.741 11:01:58 -- scripts/common.sh@395 -- # return 1 00:06:29.741 11:01:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:06:29.741 1+0 records in 00:06:29.741 1+0 records out 00:06:29.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00616844 s, 170 
MB/s 00:06:29.741 11:01:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:29.741 11:01:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:29.741 11:01:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:06:29.741 11:01:58 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:06:29.741 11:01:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:06:29.741 No valid GPT data, bailing 00:06:29.741 11:01:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:29.741 11:01:58 -- scripts/common.sh@394 -- # pt= 00:06:29.741 11:01:58 -- scripts/common.sh@395 -- # return 1 00:06:29.741 11:01:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:06:29.741 1+0 records in 00:06:29.741 1+0 records out 00:06:29.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0060822 s, 172 MB/s 00:06:29.742 11:01:58 -- spdk/autotest.sh@105 -- # sync 00:06:30.002 11:01:58 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:30.002 11:01:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:30.002 11:01:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:31.916 11:02:00 -- spdk/autotest.sh@111 -- # uname -s 00:06:31.916 11:02:00 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:31.916 11:02:00 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:31.916 11:02:00 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:32.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:32.436 Hugepages 00:06:32.436 node hugesize free / total 00:06:32.436 node0 1048576kB 0 / 0 00:06:32.436 node0 2048kB 0 / 0 00:06:32.436 00:06:32.436 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:32.698 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:32.698 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:32.698 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:32.698 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:06:32.959 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:06:32.959 11:02:01 -- spdk/autotest.sh@117 -- # uname -s 00:06:32.959 11:02:01 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:32.959 11:02:01 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:32.959 11:02:01 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:33.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:34.096 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:34.097 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:34.097 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:34.097 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:34.097 11:02:02 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:35.050 11:02:03 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:35.050 11:02:03 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:35.050 11:02:03 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:35.050 11:02:03 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:35.050 11:02:03 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:35.050 11:02:03 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:35.050 11:02:03 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
00:06:35.050 11:02:03 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:35.050 11:02:03 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:35.336 11:02:03 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:06:35.336 11:02:03 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:35.336 11:02:03 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:35.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:35.598 Waiting for block devices as requested 00:06:35.598 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.859 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.859 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.859 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:41.156 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:41.156 11:02:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:41.156 11:02:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:41.156 11:02:09 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:41.156 11:02:09 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:41.156 11:02:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:41.156 11:02:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:41.156 11:02:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:41.156 11:02:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:06:41.156 11:02:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:41.156 11:02:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:41.156 11:02:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:41.156 11:02:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:41.156 11:02:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:41.156 11:02:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:41.156 11:02:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:41.156 11:02:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:41.156 11:02:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:41.156 11:02:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:41.156 11:02:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:41.156 11:02:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:41.156 11:02:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:41.156 11:02:09 -- common/autotest_common.sh@1541 -- # continue 00:06:41.156 11:02:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:41.156 11:02:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:41.156 11:02:09 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:41.156 11:02:09 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:41.156 11:02:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
00:06:41.156 11:02:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:41.156 11:02:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:41.156 11:02:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:41.156 11:02:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:41.156 11:02:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:41.156 11:02:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:41.156 11:02:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:41.156 11:02:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:41.156 11:02:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:41.156 11:02:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:41.156 11:02:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:41.156 11:02:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:41.156 11:02:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:41.156 11:02:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:41.156 11:02:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:41.156 11:02:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:41.156 11:02:09 -- common/autotest_common.sh@1541 -- # continue 00:06:41.156 11:02:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:41.156 11:02:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:41.156 11:02:09 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:41.156 11:02:09 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:06:41.156 11:02:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:41.156 11:02:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:41.156 11:02:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:41.156 11:02:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:06:41.156 11:02:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:06:41.156 11:02:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:06:41.156 11:02:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:41.156 11:02:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:06:41.156 11:02:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:41.156 11:02:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:41.156 11:02:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:41.156 11:02:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:41.156 11:02:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:06:41.157 11:02:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:41.157 11:02:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:41.157 11:02:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:41.157 11:02:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:41.157 11:02:09 -- common/autotest_common.sh@1541 -- # continue 00:06:41.157 11:02:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:41.157 11:02:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:41.157 11:02:09 -- 
common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:41.157 11:02:09 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:06:41.157 11:02:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:41.157 11:02:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:41.157 11:02:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:41.157 11:02:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:06:41.157 11:02:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:06:41.157 11:02:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:06:41.157 11:02:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:06:41.157 11:02:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:41.157 11:02:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:41.157 11:02:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:41.157 11:02:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:41.157 11:02:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:41.157 11:02:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:06:41.157 11:02:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:41.157 11:02:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:41.157 11:02:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:41.157 11:02:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:41.157 11:02:09 -- common/autotest_common.sh@1541 -- # continue 00:06:41.157 11:02:09 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:41.157 11:02:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.157 11:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:41.157 11:02:09 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:41.157 11:02:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.157 11:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:41.157 11:02:09 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:41.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:42.304 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:42.304 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:42.304 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:42.304 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:42.304 11:02:11 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:42.304 11:02:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.304 11:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:42.566 11:02:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:42.566 11:02:11 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:42.567 11:02:11 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:42.567 11:02:11 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:42.567 11:02:11 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:42.567 11:02:11 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:42.567 11:02:11 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:42.567 11:02:11 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:42.567 11:02:11 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:42.567 
11:02:11 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:42.567 11:02:11 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:42.567 11:02:11 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:42.567 11:02:11 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:42.567 11:02:11 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:06:42.567 11:02:11 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:42.567 11:02:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:42.567 11:02:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:42.567 11:02:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:42.567 11:02:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:42.567 11:02:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:42.567 11:02:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:42.567 11:02:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:42.567 11:02:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:42.567 11:02:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:42.567 11:02:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:42.567 11:02:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:42.567 11:02:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:42.567 11:02:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:42.567 11:02:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:42.567 11:02:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:42.567 11:02:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:42.567 11:02:11 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:42.567 11:02:11 -- common/autotest_common.sh@1570 -- # return 0 00:06:42.567 11:02:11 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:42.567 11:02:11 -- common/autotest_common.sh@1578 -- # return 0 00:06:42.567 11:02:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:42.567 11:02:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:42.567 11:02:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:42.567 11:02:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:42.567 11:02:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:42.567 11:02:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.567 11:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:42.567 11:02:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:42.567 11:02:11 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:42.567 11:02:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.567 11:02:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.567 11:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:42.567 ************************************ 00:06:42.567 START TEST env 00:06:42.567 ************************************ 00:06:42.567 11:02:11 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:42.567 * Looking for test storage... 
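A simplified rendering of the filter traced above (the real helpers live in autotest_common.sh; $rootdir and the 0x0a54 argument are taken from the trace): list the NVMe BDFs that gen_nvme.sh knows about and keep only those whose PCI device ID matches the one the OPAL revert targets. The QEMU controllers here report 0x0010, so the resulting list is empty and opal_revert_cleanup has nothing to do.

  get_nvme_bdfs_by_id() {
    local want=$1 bdf
    for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
      # /sys/bus/pci/devices/<bdf>/device holds the PCI device ID, e.g. 0x0010
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$want" ]] && echo "$bdf"
    done
  }
  mapfile -t bdfs < <(get_nvme_bdfs_by_id 0x0a54)   # empty on this VM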
00:06:42.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:42.567 11:02:11 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:42.567 11:02:11 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:42.567 11:02:11 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:42.829 11:02:11 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:42.829 11:02:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.829 11:02:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.829 11:02:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.829 11:02:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.829 11:02:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.829 11:02:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.829 11:02:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.829 11:02:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.829 11:02:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.829 11:02:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.829 11:02:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.829 11:02:11 env -- scripts/common.sh@344 -- # case "$op" in 00:06:42.829 11:02:11 env -- scripts/common.sh@345 -- # : 1 00:06:42.829 11:02:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.829 11:02:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.829 11:02:11 env -- scripts/common.sh@365 -- # decimal 1 00:06:42.829 11:02:11 env -- scripts/common.sh@353 -- # local d=1 00:06:42.829 11:02:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.829 11:02:11 env -- scripts/common.sh@355 -- # echo 1 00:06:42.829 11:02:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.829 11:02:11 env -- scripts/common.sh@366 -- # decimal 2 00:06:42.829 11:02:11 env -- scripts/common.sh@353 -- # local d=2 00:06:42.829 11:02:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.829 11:02:11 env -- scripts/common.sh@355 -- # echo 2 00:06:42.829 11:02:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.829 11:02:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.829 11:02:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.829 11:02:11 env -- scripts/common.sh@368 -- # return 0 00:06:42.829 11:02:11 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.829 11:02:11 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:42.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.829 --rc genhtml_branch_coverage=1 00:06:42.829 --rc genhtml_function_coverage=1 00:06:42.829 --rc genhtml_legend=1 00:06:42.829 --rc geninfo_all_blocks=1 00:06:42.829 --rc geninfo_unexecuted_blocks=1 00:06:42.829 00:06:42.829 ' 00:06:42.829 11:02:11 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:42.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.829 --rc genhtml_branch_coverage=1 00:06:42.829 --rc genhtml_function_coverage=1 00:06:42.829 --rc genhtml_legend=1 00:06:42.829 --rc geninfo_all_blocks=1 00:06:42.829 --rc geninfo_unexecuted_blocks=1 00:06:42.829 00:06:42.829 ' 00:06:42.829 11:02:11 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:42.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.829 --rc genhtml_branch_coverage=1 00:06:42.829 --rc genhtml_function_coverage=1 00:06:42.829 --rc 
genhtml_legend=1 00:06:42.829 --rc geninfo_all_blocks=1 00:06:42.829 --rc geninfo_unexecuted_blocks=1 00:06:42.829 00:06:42.829 ' 00:06:42.829 11:02:11 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:42.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.829 --rc genhtml_branch_coverage=1 00:06:42.829 --rc genhtml_function_coverage=1 00:06:42.829 --rc genhtml_legend=1 00:06:42.829 --rc geninfo_all_blocks=1 00:06:42.829 --rc geninfo_unexecuted_blocks=1 00:06:42.829 00:06:42.829 ' 00:06:42.829 11:02:11 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:42.829 11:02:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.829 11:02:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.830 11:02:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:42.830 ************************************ 00:06:42.830 START TEST env_memory 00:06:42.830 ************************************ 00:06:42.830 11:02:11 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:42.830 00:06:42.830 00:06:42.830 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.830 http://cunit.sourceforge.net/ 00:06:42.830 00:06:42.830 00:06:42.830 Suite: memory 00:06:42.830 Test: alloc and free memory map ...[2024-11-27 11:02:11.548810] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:42.830 passed 00:06:42.830 Test: mem map translation ...[2024-11-27 11:02:11.587764] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:42.830 [2024-11-27 11:02:11.587824] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:42.830 [2024-11-27 11:02:11.587896] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:42.830 [2024-11-27 11:02:11.587914] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:42.830 passed 00:06:42.830 Test: mem map registration ...[2024-11-27 11:02:11.656212] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:42.830 [2024-11-27 11:02:11.656264] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:42.830 passed 00:06:43.094 Test: mem map adjacent registrations ...passed 00:06:43.094 00:06:43.094 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.094 suites 1 1 n/a 0 0 00:06:43.094 tests 4 4 4 0 0 00:06:43.094 asserts 152 152 152 0 n/a 00:06:43.094 00:06:43.094 Elapsed time = 0.233 seconds 00:06:43.094 00:06:43.094 real 0m0.269s 00:06:43.094 user 0m0.242s 00:06:43.094 sys 0m0.018s 00:06:43.094 11:02:11 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.094 ************************************ 00:06:43.094 END TEST env_memory 00:06:43.094 ************************************ 00:06:43.094 11:02:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:43.094 11:02:11 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:43.094 11:02:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.094 11:02:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.094 11:02:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:43.094 ************************************ 00:06:43.094 START TEST env_vtophys 00:06:43.094 ************************************ 00:06:43.094 11:02:11 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:43.094 EAL: lib.eal log level changed from notice to debug 00:06:43.094 EAL: Detected lcore 0 as core 0 on socket 0 00:06:43.094 EAL: Detected lcore 1 as core 0 on socket 0 00:06:43.094 EAL: Detected lcore 2 as core 0 on socket 0 00:06:43.094 EAL: Detected lcore 3 as core 0 on socket 0 00:06:43.094 EAL: Detected lcore 4 as core 0 on socket 0 00:06:43.094 EAL: Detected lcore 5 as core 0 on socket 0 00:06:43.094 EAL: Detected lcore 6 as core 0 on socket 0 00:06:43.094 EAL: Detected lcore 7 as core 0 on socket 0 00:06:43.094 EAL: Detected lcore 8 as core 0 on socket 0 00:06:43.094 EAL: Detected lcore 9 as core 0 on socket 0 00:06:43.094 EAL: Maximum logical cores by configuration: 128 00:06:43.094 EAL: Detected CPU lcores: 10 00:06:43.094 EAL: Detected NUMA nodes: 1 00:06:43.094 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:43.094 EAL: Detected shared linkage of DPDK 00:06:43.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:43.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:43.094 EAL: Registered [vdev] bus. 00:06:43.094 EAL: bus.vdev log level changed from disabled to notice 00:06:43.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:43.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:43.094 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:43.094 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:43.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:43.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:43.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:43.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:43.094 EAL: No shared files mode enabled, IPC will be disabled 00:06:43.094 EAL: No shared files mode enabled, IPC is disabled 00:06:43.094 EAL: Selected IOVA mode 'PA' 00:06:43.094 EAL: Probing VFIO support... 00:06:43.094 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:43.094 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:43.094 EAL: Ask a virtual area of 0x2e000 bytes 00:06:43.094 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:43.094 EAL: Setting up physically contiguous memory... 
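The "Probing VFIO support" lines above are why this run ends up in IOVA mode 'PA': no vfio kernel module is present in the VM, so the devices stay on uio_pci_generic (the driver setup.sh bound them to earlier). A minimal check for the same condition; this is only a sketch of the decision the messages describe, not EAL's or setup.sh's actual logic:

  if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
    echo "vfio present: devices could be bound to vfio-pci and EAL may use IOVA mode VA"
  else
    echo "no vfio: fall back to uio_pci_generic, EAL selects IOVA mode PA"
  fi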
00:06:43.094 EAL: Setting maximum number of open files to 524288 00:06:43.094 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:43.094 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:43.094 EAL: Ask a virtual area of 0x61000 bytes 00:06:43.094 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:43.094 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:43.094 EAL: Ask a virtual area of 0x400000000 bytes 00:06:43.094 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:43.094 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:43.094 EAL: Ask a virtual area of 0x61000 bytes 00:06:43.094 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:43.094 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:43.094 EAL: Ask a virtual area of 0x400000000 bytes 00:06:43.094 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:43.094 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:43.094 EAL: Ask a virtual area of 0x61000 bytes 00:06:43.094 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:43.094 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:43.095 EAL: Ask a virtual area of 0x400000000 bytes 00:06:43.095 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:43.095 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:43.095 EAL: Ask a virtual area of 0x61000 bytes 00:06:43.095 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:43.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:43.095 EAL: Ask a virtual area of 0x400000000 bytes 00:06:43.095 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:43.095 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:43.095 EAL: Hugepages will be freed exactly as allocated. 00:06:43.095 EAL: No shared files mode enabled, IPC is disabled 00:06:43.095 EAL: No shared files mode enabled, IPC is disabled 00:06:43.358 EAL: TSC frequency is ~2600000 KHz 00:06:43.358 EAL: Main lcore 0 is ready (tid=7f6a5d617a40;cpuset=[0]) 00:06:43.358 EAL: Trying to obtain current memory policy. 00:06:43.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.358 EAL: Restoring previous memory policy: 0 00:06:43.358 EAL: request: mp_malloc_sync 00:06:43.358 EAL: No shared files mode enabled, IPC is disabled 00:06:43.358 EAL: Heap on socket 0 was expanded by 2MB 00:06:43.358 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:43.358 EAL: No shared files mode enabled, IPC is disabled 00:06:43.358 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:43.358 EAL: Mem event callback 'spdk:(nil)' registered 00:06:43.358 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:43.358 00:06:43.358 00:06:43.358 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.358 http://cunit.sourceforge.net/ 00:06:43.358 00:06:43.358 00:06:43.358 Suite: components_suite 00:06:43.620 Test: vtophys_malloc_test ...passed 00:06:43.620 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
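The size of each reserved virtual-address window above follows directly from the list geometry EAL printed: four memseg lists, each with n_segs:8192 and hugepage_sz:2097152 (2 MiB).

  echo $(( 8192 * 2097152 ))       # 17179869184 bytes = 0x400000000, one memseg list's VA reservation
  echo $(( 4 * 8192 * 2097152 ))   # all four lists reserve roughly 64 GiB of address space up front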
00:06:43.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.620 EAL: Restoring previous memory policy: 4 00:06:43.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.620 EAL: request: mp_malloc_sync 00:06:43.620 EAL: No shared files mode enabled, IPC is disabled 00:06:43.620 EAL: Heap on socket 0 was expanded by 4MB 00:06:43.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.620 EAL: request: mp_malloc_sync 00:06:43.620 EAL: No shared files mode enabled, IPC is disabled 00:06:43.620 EAL: Heap on socket 0 was shrunk by 4MB 00:06:43.620 EAL: Trying to obtain current memory policy. 00:06:43.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.620 EAL: Restoring previous memory policy: 4 00:06:43.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.620 EAL: request: mp_malloc_sync 00:06:43.620 EAL: No shared files mode enabled, IPC is disabled 00:06:43.620 EAL: Heap on socket 0 was expanded by 6MB 00:06:43.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.620 EAL: request: mp_malloc_sync 00:06:43.620 EAL: No shared files mode enabled, IPC is disabled 00:06:43.620 EAL: Heap on socket 0 was shrunk by 6MB 00:06:43.620 EAL: Trying to obtain current memory policy. 00:06:43.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.620 EAL: Restoring previous memory policy: 4 00:06:43.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.620 EAL: request: mp_malloc_sync 00:06:43.620 EAL: No shared files mode enabled, IPC is disabled 00:06:43.620 EAL: Heap on socket 0 was expanded by 10MB 00:06:43.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.620 EAL: request: mp_malloc_sync 00:06:43.620 EAL: No shared files mode enabled, IPC is disabled 00:06:43.620 EAL: Heap on socket 0 was shrunk by 10MB 00:06:43.620 EAL: Trying to obtain current memory policy. 00:06:43.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.620 EAL: Restoring previous memory policy: 4 00:06:43.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.620 EAL: request: mp_malloc_sync 00:06:43.620 EAL: No shared files mode enabled, IPC is disabled 00:06:43.620 EAL: Heap on socket 0 was expanded by 18MB 00:06:43.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.620 EAL: request: mp_malloc_sync 00:06:43.620 EAL: No shared files mode enabled, IPC is disabled 00:06:43.620 EAL: Heap on socket 0 was shrunk by 18MB 00:06:43.620 EAL: Trying to obtain current memory policy. 00:06:43.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.620 EAL: Restoring previous memory policy: 4 00:06:43.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.620 EAL: request: mp_malloc_sync 00:06:43.620 EAL: No shared files mode enabled, IPC is disabled 00:06:43.620 EAL: Heap on socket 0 was expanded by 34MB 00:06:43.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.620 EAL: request: mp_malloc_sync 00:06:43.620 EAL: No shared files mode enabled, IPC is disabled 00:06:43.620 EAL: Heap on socket 0 was shrunk by 34MB 00:06:43.620 EAL: Trying to obtain current memory policy. 
00:06:43.620 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.620 EAL: Restoring previous memory policy: 4 00:06:43.620 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.620 EAL: request: mp_malloc_sync 00:06:43.621 EAL: No shared files mode enabled, IPC is disabled 00:06:43.621 EAL: Heap on socket 0 was expanded by 66MB 00:06:43.621 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.621 EAL: request: mp_malloc_sync 00:06:43.621 EAL: No shared files mode enabled, IPC is disabled 00:06:43.621 EAL: Heap on socket 0 was shrunk by 66MB 00:06:43.621 EAL: Trying to obtain current memory policy. 00:06:43.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.621 EAL: Restoring previous memory policy: 4 00:06:43.621 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.621 EAL: request: mp_malloc_sync 00:06:43.621 EAL: No shared files mode enabled, IPC is disabled 00:06:43.621 EAL: Heap on socket 0 was expanded by 130MB 00:06:43.882 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.882 EAL: request: mp_malloc_sync 00:06:43.882 EAL: No shared files mode enabled, IPC is disabled 00:06:43.882 EAL: Heap on socket 0 was shrunk by 130MB 00:06:43.882 EAL: Trying to obtain current memory policy. 00:06:43.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.882 EAL: Restoring previous memory policy: 4 00:06:43.882 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.882 EAL: request: mp_malloc_sync 00:06:43.882 EAL: No shared files mode enabled, IPC is disabled 00:06:43.882 EAL: Heap on socket 0 was expanded by 258MB 00:06:43.882 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.882 EAL: request: mp_malloc_sync 00:06:43.882 EAL: No shared files mode enabled, IPC is disabled 00:06:43.882 EAL: Heap on socket 0 was shrunk by 258MB 00:06:43.882 EAL: Trying to obtain current memory policy. 00:06:43.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.166 EAL: Restoring previous memory policy: 4 00:06:44.166 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.166 EAL: request: mp_malloc_sync 00:06:44.166 EAL: No shared files mode enabled, IPC is disabled 00:06:44.166 EAL: Heap on socket 0 was expanded by 514MB 00:06:44.166 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.166 EAL: request: mp_malloc_sync 00:06:44.166 EAL: No shared files mode enabled, IPC is disabled 00:06:44.166 EAL: Heap on socket 0 was shrunk by 514MB 00:06:44.166 EAL: Trying to obtain current memory policy. 
00:06:44.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.428 EAL: Restoring previous memory policy: 4 00:06:44.428 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.428 EAL: request: mp_malloc_sync 00:06:44.428 EAL: No shared files mode enabled, IPC is disabled 00:06:44.428 EAL: Heap on socket 0 was expanded by 1026MB 00:06:44.690 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.951 passed 00:06:44.951 00:06:44.951 EAL: request: mp_malloc_sync 00:06:44.951 EAL: No shared files mode enabled, IPC is disabled 00:06:44.951 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:44.951 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.951 suites 1 1 n/a 0 0 00:06:44.951 tests 2 2 2 0 0 00:06:44.951 asserts 5841 5841 5841 0 n/a 00:06:44.951 00:06:44.951 Elapsed time = 1.559 seconds 00:06:44.951 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.951 EAL: request: mp_malloc_sync 00:06:44.951 EAL: No shared files mode enabled, IPC is disabled 00:06:44.951 EAL: Heap on socket 0 was shrunk by 2MB 00:06:44.951 EAL: No shared files mode enabled, IPC is disabled 00:06:44.951 EAL: No shared files mode enabled, IPC is disabled 00:06:44.951 EAL: No shared files mode enabled, IPC is disabled 00:06:44.951 00:06:44.951 real 0m1.814s 00:06:44.951 user 0m0.778s 00:06:44.951 sys 0m0.890s 00:06:44.951 11:02:13 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.952 11:02:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:44.952 ************************************ 00:06:44.952 END TEST env_vtophys 00:06:44.952 ************************************ 00:06:44.952 11:02:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:44.952 11:02:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.952 11:02:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.952 11:02:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:44.952 ************************************ 00:06:44.952 START TEST env_pci 00:06:44.952 ************************************ 00:06:44.952 11:02:13 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:44.952 00:06:44.952 00:06:44.952 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.952 http://cunit.sourceforge.net/ 00:06:44.952 00:06:44.952 00:06:44.952 Suite: pci 00:06:44.952 Test: pci_hook ...[2024-11-27 11:02:13.730403] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69706 has claimed it 00:06:44.952 passed 00:06:44.952 00:06:44.952 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.952 suites 1 1 n/a 0 0 00:06:44.952 tests 1 1 1 0 0 00:06:44.952 asserts 25 25 25 0 n/a 00:06:44.952 00:06:44.952 Elapsed time = 0.004 seconds 00:06:44.952 EAL: Cannot find device (10000:00:01.0) 00:06:44.952 EAL: Failed to attach device on primary process 00:06:44.952 00:06:44.952 real 0m0.064s 00:06:44.952 user 0m0.023s 00:06:44.952 sys 0m0.039s 00:06:44.952 11:02:13 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.952 ************************************ 00:06:44.952 END TEST env_pci 00:06:44.952 ************************************ 00:06:44.952 11:02:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:44.952 11:02:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:44.952 11:02:13 env -- env/env.sh@15 -- # uname 00:06:44.952 11:02:13 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:44.952 11:02:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:44.952 11:02:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:44.952 11:02:13 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:44.952 11:02:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.952 11:02:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.214 ************************************ 00:06:45.214 START TEST env_dpdk_post_init 00:06:45.214 ************************************ 00:06:45.214 11:02:13 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:45.214 EAL: Detected CPU lcores: 10 00:06:45.214 EAL: Detected NUMA nodes: 1 00:06:45.214 EAL: Detected shared linkage of DPDK 00:06:45.214 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:45.214 EAL: Selected IOVA mode 'PA' 00:06:45.214 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:45.214 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:45.214 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:45.214 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:45.214 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:45.214 Starting DPDK initialization... 00:06:45.214 Starting SPDK post initialization... 00:06:45.214 SPDK NVMe probe 00:06:45.214 Attaching to 0000:00:10.0 00:06:45.214 Attaching to 0000:00:11.0 00:06:45.214 Attaching to 0000:00:12.0 00:06:45.214 Attaching to 0000:00:13.0 00:06:45.214 Attached to 0000:00:10.0 00:06:45.214 Attached to 0000:00:11.0 00:06:45.214 Attached to 0000:00:13.0 00:06:45.214 Attached to 0000:00:12.0 00:06:45.214 Cleaning up... 
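How env.sh assembled the arguments for the post-init run above, condensed from the traced lines (run_test is the autotest wrapper that prints the START/END banners and timing; the comment on --base-virtaddr is an interpretation, not something the log states):

  argv='-c 0x1 '                             # single-core mask
  if [[ $(uname) == Linux ]]; then
    argv+=--base-virtaddr=0x200000000000     # fixed VA base, keeps DPDK mappings predictable
  fi
  run_test env_dpdk_post_init test/env/env_dpdk_post_init/env_dpdk_post_init $argv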
00:06:45.214 00:06:45.214 real 0m0.242s 00:06:45.214 user 0m0.070s 00:06:45.214 sys 0m0.076s 00:06:45.214 11:02:14 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.214 ************************************ 00:06:45.214 END TEST env_dpdk_post_init 00:06:45.214 ************************************ 00:06:45.214 11:02:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:45.475 11:02:14 env -- env/env.sh@26 -- # uname 00:06:45.475 11:02:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:45.475 11:02:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:45.475 11:02:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.475 11:02:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.475 11:02:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.475 ************************************ 00:06:45.475 START TEST env_mem_callbacks 00:06:45.475 ************************************ 00:06:45.475 11:02:14 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:45.475 EAL: Detected CPU lcores: 10 00:06:45.475 EAL: Detected NUMA nodes: 1 00:06:45.475 EAL: Detected shared linkage of DPDK 00:06:45.475 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:45.475 EAL: Selected IOVA mode 'PA' 00:06:45.475 00:06:45.475 00:06:45.475 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.475 http://cunit.sourceforge.net/ 00:06:45.475 00:06:45.475 00:06:45.475 Suite: memory 00:06:45.475 Test: test ... 00:06:45.475 register 0x200000200000 2097152 00:06:45.475 malloc 3145728 00:06:45.475 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:45.475 register 0x200000400000 4194304 00:06:45.475 buf 0x200000500000 len 3145728 PASSED 00:06:45.475 malloc 64 00:06:45.475 buf 0x2000004fff40 len 64 PASSED 00:06:45.475 malloc 4194304 00:06:45.475 register 0x200000800000 6291456 00:06:45.475 buf 0x200000a00000 len 4194304 PASSED 00:06:45.475 free 0x200000500000 3145728 00:06:45.475 free 0x2000004fff40 64 00:06:45.475 unregister 0x200000400000 4194304 PASSED 00:06:45.475 free 0x200000a00000 4194304 00:06:45.475 unregister 0x200000800000 6291456 PASSED 00:06:45.475 malloc 8388608 00:06:45.475 register 0x200000400000 10485760 00:06:45.475 buf 0x200000600000 len 8388608 PASSED 00:06:45.475 free 0x200000600000 8388608 00:06:45.475 unregister 0x200000400000 10485760 PASSED 00:06:45.475 passed 00:06:45.475 00:06:45.475 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.475 suites 1 1 n/a 0 0 00:06:45.475 tests 1 1 1 0 0 00:06:45.475 asserts 15 15 15 0 n/a 00:06:45.475 00:06:45.475 Elapsed time = 0.009 seconds 00:06:45.475 00:06:45.475 real 0m0.173s 00:06:45.475 user 0m0.021s 00:06:45.475 sys 0m0.050s 00:06:45.475 11:02:14 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.475 11:02:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:45.475 ************************************ 00:06:45.475 END TEST env_mem_callbacks 00:06:45.475 ************************************ 00:06:45.737 ************************************ 00:06:45.737 END TEST env 00:06:45.737 ************************************ 00:06:45.737 00:06:45.737 real 0m3.053s 00:06:45.737 user 0m1.300s 00:06:45.737 sys 0m1.297s 00:06:45.737 11:02:14 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.737 11:02:14 env -- 
common/autotest_common.sh@10 -- # set +x 00:06:45.737 11:02:14 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:45.737 11:02:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.737 11:02:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.737 11:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:45.737 ************************************ 00:06:45.737 START TEST rpc 00:06:45.737 ************************************ 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:45.737 * Looking for test storage... 00:06:45.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:45.737 11:02:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.737 11:02:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.737 11:02:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.737 11:02:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.737 11:02:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.737 11:02:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.737 11:02:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.737 11:02:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.737 11:02:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.737 11:02:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.737 11:02:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.737 11:02:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:45.737 11:02:14 rpc -- scripts/common.sh@345 -- # : 1 00:06:45.737 11:02:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.737 11:02:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.737 11:02:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:45.737 11:02:14 rpc -- scripts/common.sh@353 -- # local d=1 00:06:45.737 11:02:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.737 11:02:14 rpc -- scripts/common.sh@355 -- # echo 1 00:06:45.737 11:02:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.737 11:02:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:45.737 11:02:14 rpc -- scripts/common.sh@353 -- # local d=2 00:06:45.737 11:02:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.737 11:02:14 rpc -- scripts/common.sh@355 -- # echo 2 00:06:45.737 11:02:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.737 11:02:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.737 11:02:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.737 11:02:14 rpc -- scripts/common.sh@368 -- # return 0 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:45.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.737 --rc genhtml_branch_coverage=1 00:06:45.737 --rc genhtml_function_coverage=1 00:06:45.737 --rc genhtml_legend=1 00:06:45.737 --rc geninfo_all_blocks=1 00:06:45.737 --rc geninfo_unexecuted_blocks=1 00:06:45.737 00:06:45.737 ' 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:45.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.737 --rc genhtml_branch_coverage=1 00:06:45.737 --rc genhtml_function_coverage=1 00:06:45.737 --rc genhtml_legend=1 00:06:45.737 --rc geninfo_all_blocks=1 00:06:45.737 --rc geninfo_unexecuted_blocks=1 00:06:45.737 00:06:45.737 ' 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:45.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.737 --rc genhtml_branch_coverage=1 00:06:45.737 --rc genhtml_function_coverage=1 00:06:45.737 --rc genhtml_legend=1 00:06:45.737 --rc geninfo_all_blocks=1 00:06:45.737 --rc geninfo_unexecuted_blocks=1 00:06:45.737 00:06:45.737 ' 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:45.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.737 --rc genhtml_branch_coverage=1 00:06:45.737 --rc genhtml_function_coverage=1 00:06:45.737 --rc genhtml_legend=1 00:06:45.737 --rc geninfo_all_blocks=1 00:06:45.737 --rc geninfo_unexecuted_blocks=1 00:06:45.737 00:06:45.737 ' 00:06:45.737 11:02:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69828 00:06:45.737 11:02:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.737 11:02:14 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:45.737 11:02:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69828 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@831 -- # '[' -z 69828 ']' 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
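The harness pattern rpc.sh uses above, condensed (killprocess and waitforlisten are the autotest_common.sh helpers visible in the trace; launching with & and capturing $! is a sketch of how the PID is obtained): start spdk_tgt with the bdev tracepoint group enabled, arrange for it to be killed on exit, and block until its RPC socket is up.

  build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$spdk_pid"    # returns once /var/tmp/spdk.sock accepts connections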
00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.737 11:02:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.999 [2024-11-27 11:02:14.674111] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:45.999 [2024-11-27 11:02:14.674444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69828 ] 00:06:45.999 [2024-11-27 11:02:14.823281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.999 [2024-11-27 11:02:14.878649] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:45.999 [2024-11-27 11:02:14.878727] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69828' to capture a snapshot of events at runtime. 00:06:45.999 [2024-11-27 11:02:14.878742] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.999 [2024-11-27 11:02:14.878751] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.999 [2024-11-27 11:02:14.878766] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69828 for offline analysis/debug. 00:06:45.999 [2024-11-27 11:02:14.878808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.943 11:02:15 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.943 11:02:15 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:46.943 11:02:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:46.943 11:02:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:46.943 11:02:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:46.943 11:02:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:46.943 11:02:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.943 11:02:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.943 11:02:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.943 ************************************ 00:06:46.943 START TEST rpc_integrity 00:06:46.943 ************************************ 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.943 
11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:46.943 { 00:06:46.943 "name": "Malloc0", 00:06:46.943 "aliases": [ 00:06:46.943 "328f4a26-01cc-477c-9bea-fea640cb0a95" 00:06:46.943 ], 00:06:46.943 "product_name": "Malloc disk", 00:06:46.943 "block_size": 512, 00:06:46.943 "num_blocks": 16384, 00:06:46.943 "uuid": "328f4a26-01cc-477c-9bea-fea640cb0a95", 00:06:46.943 "assigned_rate_limits": { 00:06:46.943 "rw_ios_per_sec": 0, 00:06:46.943 "rw_mbytes_per_sec": 0, 00:06:46.943 "r_mbytes_per_sec": 0, 00:06:46.943 "w_mbytes_per_sec": 0 00:06:46.943 }, 00:06:46.943 "claimed": false, 00:06:46.943 "zoned": false, 00:06:46.943 "supported_io_types": { 00:06:46.943 "read": true, 00:06:46.943 "write": true, 00:06:46.943 "unmap": true, 00:06:46.943 "flush": true, 00:06:46.943 "reset": true, 00:06:46.943 "nvme_admin": false, 00:06:46.943 "nvme_io": false, 00:06:46.943 "nvme_io_md": false, 00:06:46.943 "write_zeroes": true, 00:06:46.943 "zcopy": true, 00:06:46.943 "get_zone_info": false, 00:06:46.943 "zone_management": false, 00:06:46.943 "zone_append": false, 00:06:46.943 "compare": false, 00:06:46.943 "compare_and_write": false, 00:06:46.943 "abort": true, 00:06:46.943 "seek_hole": false, 00:06:46.943 "seek_data": false, 00:06:46.943 "copy": true, 00:06:46.943 "nvme_iov_md": false 00:06:46.943 }, 00:06:46.943 "memory_domains": [ 00:06:46.943 { 00:06:46.943 "dma_device_id": "system", 00:06:46.943 "dma_device_type": 1 00:06:46.943 }, 00:06:46.943 { 00:06:46.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.943 "dma_device_type": 2 00:06:46.943 } 00:06:46.943 ], 00:06:46.943 "driver_specific": {} 00:06:46.943 } 00:06:46.943 ]' 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.943 [2024-11-27 11:02:15.644143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:46.943 [2024-11-27 11:02:15.644223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:46.943 [2024-11-27 11:02:15.644254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:46.943 [2024-11-27 11:02:15.644265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:46.943 [2024-11-27 11:02:15.646953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:46.943 [2024-11-27 11:02:15.647162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:46.943 Passthru0 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
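The rpc_integrity test, condensed to its essential RPC sequence (rpc_cmd is the framework wrapper used in the trace for issuing RPCs to the target started earlier; the sizes match the JSON above, 16384 blocks x 512 bytes = 8 MiB). The verification and teardown steps sketched here correspond to the trace that continues below.

  malloc=$(rpc_cmd bdev_malloc_create 8 512)              # returns the new bdev's name, "Malloc0"
  rpc_cmd bdev_passthru_create -b "$malloc" -p Passthru0  # layer a passthru bdev on top of it
  [[ $(rpc_cmd bdev_get_bdevs | jq length) == 2 ]]        # both bdevs are reported
  rpc_cmd bdev_passthru_delete Passthru0
  rpc_cmd bdev_malloc_delete "$malloc"
  [[ $(rpc_cmd bdev_get_bdevs | jq length) == 0 ]]        # list is empty again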
00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.943 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:46.943 { 00:06:46.943 "name": "Malloc0", 00:06:46.943 "aliases": [ 00:06:46.943 "328f4a26-01cc-477c-9bea-fea640cb0a95" 00:06:46.943 ], 00:06:46.943 "product_name": "Malloc disk", 00:06:46.943 "block_size": 512, 00:06:46.943 "num_blocks": 16384, 00:06:46.943 "uuid": "328f4a26-01cc-477c-9bea-fea640cb0a95", 00:06:46.943 "assigned_rate_limits": { 00:06:46.943 "rw_ios_per_sec": 0, 00:06:46.943 "rw_mbytes_per_sec": 0, 00:06:46.943 "r_mbytes_per_sec": 0, 00:06:46.943 "w_mbytes_per_sec": 0 00:06:46.943 }, 00:06:46.943 "claimed": true, 00:06:46.943 "claim_type": "exclusive_write", 00:06:46.943 "zoned": false, 00:06:46.943 "supported_io_types": { 00:06:46.943 "read": true, 00:06:46.943 "write": true, 00:06:46.943 "unmap": true, 00:06:46.943 "flush": true, 00:06:46.943 "reset": true, 00:06:46.943 "nvme_admin": false, 00:06:46.943 "nvme_io": false, 00:06:46.943 "nvme_io_md": false, 00:06:46.943 "write_zeroes": true, 00:06:46.943 "zcopy": true, 00:06:46.943 "get_zone_info": false, 00:06:46.943 "zone_management": false, 00:06:46.943 "zone_append": false, 00:06:46.943 "compare": false, 00:06:46.943 "compare_and_write": false, 00:06:46.943 "abort": true, 00:06:46.943 "seek_hole": false, 00:06:46.943 "seek_data": false, 00:06:46.943 "copy": true, 00:06:46.943 "nvme_iov_md": false 00:06:46.943 }, 00:06:46.943 "memory_domains": [ 00:06:46.943 { 00:06:46.943 "dma_device_id": "system", 00:06:46.943 "dma_device_type": 1 00:06:46.943 }, 00:06:46.943 { 00:06:46.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.943 "dma_device_type": 2 00:06:46.943 } 00:06:46.943 ], 00:06:46.943 "driver_specific": {} 00:06:46.943 }, 00:06:46.943 { 00:06:46.943 "name": "Passthru0", 00:06:46.943 "aliases": [ 00:06:46.943 "f8bc99d7-64a3-5388-9ade-1d75c65f921a" 00:06:46.943 ], 00:06:46.943 "product_name": "passthru", 00:06:46.943 "block_size": 512, 00:06:46.943 "num_blocks": 16384, 00:06:46.943 "uuid": "f8bc99d7-64a3-5388-9ade-1d75c65f921a", 00:06:46.943 "assigned_rate_limits": { 00:06:46.943 "rw_ios_per_sec": 0, 00:06:46.943 "rw_mbytes_per_sec": 0, 00:06:46.943 "r_mbytes_per_sec": 0, 00:06:46.943 "w_mbytes_per_sec": 0 00:06:46.943 }, 00:06:46.943 "claimed": false, 00:06:46.943 "zoned": false, 00:06:46.943 "supported_io_types": { 00:06:46.943 "read": true, 00:06:46.943 "write": true, 00:06:46.943 "unmap": true, 00:06:46.943 "flush": true, 00:06:46.943 "reset": true, 00:06:46.943 "nvme_admin": false, 00:06:46.943 "nvme_io": false, 00:06:46.943 "nvme_io_md": false, 00:06:46.943 "write_zeroes": true, 00:06:46.943 "zcopy": true, 00:06:46.943 "get_zone_info": false, 00:06:46.943 "zone_management": false, 00:06:46.943 "zone_append": false, 00:06:46.943 "compare": false, 00:06:46.943 "compare_and_write": false, 00:06:46.943 "abort": true, 00:06:46.943 "seek_hole": false, 00:06:46.943 "seek_data": false, 00:06:46.943 "copy": true, 00:06:46.943 "nvme_iov_md": false 00:06:46.943 }, 00:06:46.943 "memory_domains": [ 00:06:46.943 { 00:06:46.943 "dma_device_id": "system", 00:06:46.943 "dma_device_type": 1 00:06:46.943 }, 00:06:46.943 { 00:06:46.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.943 "dma_device_type": 
2 00:06:46.943 } 00:06:46.943 ], 00:06:46.943 "driver_specific": { 00:06:46.943 "passthru": { 00:06:46.943 "name": "Passthru0", 00:06:46.943 "base_bdev_name": "Malloc0" 00:06:46.943 } 00:06:46.943 } 00:06:46.943 } 00:06:46.943 ]' 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:46.943 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:46.944 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:46.944 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.944 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.944 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.944 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:46.944 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.944 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.944 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.944 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:46.944 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.944 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.944 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.944 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:46.944 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:46.944 ************************************ 00:06:46.944 END TEST rpc_integrity 00:06:46.944 ************************************ 00:06:46.944 11:02:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:46.944 00:06:46.944 real 0m0.235s 00:06:46.944 user 0m0.127s 00:06:46.944 sys 0m0.034s 00:06:46.944 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.944 11:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.944 11:02:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:46.944 11:02:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.944 11:02:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.944 11:02:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.206 ************************************ 00:06:47.206 START TEST rpc_plugins 00:06:47.206 ************************************ 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:47.206 11:02:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.206 11:02:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:47.206 11:02:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.206 11:02:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:47.206 { 00:06:47.206 "name": "Malloc1", 00:06:47.206 
"aliases": [ 00:06:47.206 "2833bcf4-a6de-4d98-8d0f-287d838a93a6" 00:06:47.206 ], 00:06:47.206 "product_name": "Malloc disk", 00:06:47.206 "block_size": 4096, 00:06:47.206 "num_blocks": 256, 00:06:47.206 "uuid": "2833bcf4-a6de-4d98-8d0f-287d838a93a6", 00:06:47.206 "assigned_rate_limits": { 00:06:47.206 "rw_ios_per_sec": 0, 00:06:47.206 "rw_mbytes_per_sec": 0, 00:06:47.206 "r_mbytes_per_sec": 0, 00:06:47.206 "w_mbytes_per_sec": 0 00:06:47.206 }, 00:06:47.206 "claimed": false, 00:06:47.206 "zoned": false, 00:06:47.206 "supported_io_types": { 00:06:47.206 "read": true, 00:06:47.206 "write": true, 00:06:47.206 "unmap": true, 00:06:47.206 "flush": true, 00:06:47.206 "reset": true, 00:06:47.206 "nvme_admin": false, 00:06:47.206 "nvme_io": false, 00:06:47.206 "nvme_io_md": false, 00:06:47.206 "write_zeroes": true, 00:06:47.206 "zcopy": true, 00:06:47.206 "get_zone_info": false, 00:06:47.206 "zone_management": false, 00:06:47.206 "zone_append": false, 00:06:47.206 "compare": false, 00:06:47.206 "compare_and_write": false, 00:06:47.206 "abort": true, 00:06:47.206 "seek_hole": false, 00:06:47.206 "seek_data": false, 00:06:47.206 "copy": true, 00:06:47.206 "nvme_iov_md": false 00:06:47.206 }, 00:06:47.206 "memory_domains": [ 00:06:47.206 { 00:06:47.206 "dma_device_id": "system", 00:06:47.206 "dma_device_type": 1 00:06:47.206 }, 00:06:47.206 { 00:06:47.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.206 "dma_device_type": 2 00:06:47.206 } 00:06:47.206 ], 00:06:47.206 "driver_specific": {} 00:06:47.206 } 00:06:47.206 ]' 00:06:47.206 11:02:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:47.206 11:02:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:47.206 11:02:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.206 11:02:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.206 11:02:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:47.206 11:02:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:47.206 ************************************ 00:06:47.206 END TEST rpc_plugins 00:06:47.206 ************************************ 00:06:47.206 11:02:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:47.206 00:06:47.206 real 0m0.118s 00:06:47.206 user 0m0.059s 00:06:47.206 sys 0m0.018s 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.206 11:02:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.206 11:02:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:47.206 11:02:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.206 11:02:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.206 11:02:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.206 ************************************ 00:06:47.206 START TEST rpc_trace_cmd_test 00:06:47.206 ************************************ 00:06:47.206 11:02:16 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:47.206 11:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:47.206 11:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:47.206 11:02:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.206 11:02:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.206 11:02:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.206 11:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:47.206 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69828", 00:06:47.206 "tpoint_group_mask": "0x8", 00:06:47.206 "iscsi_conn": { 00:06:47.206 "mask": "0x2", 00:06:47.206 "tpoint_mask": "0x0" 00:06:47.206 }, 00:06:47.206 "scsi": { 00:06:47.206 "mask": "0x4", 00:06:47.206 "tpoint_mask": "0x0" 00:06:47.206 }, 00:06:47.206 "bdev": { 00:06:47.206 "mask": "0x8", 00:06:47.206 "tpoint_mask": "0xffffffffffffffff" 00:06:47.206 }, 00:06:47.206 "nvmf_rdma": { 00:06:47.206 "mask": "0x10", 00:06:47.206 "tpoint_mask": "0x0" 00:06:47.206 }, 00:06:47.206 "nvmf_tcp": { 00:06:47.206 "mask": "0x20", 00:06:47.206 "tpoint_mask": "0x0" 00:06:47.206 }, 00:06:47.206 "ftl": { 00:06:47.206 "mask": "0x40", 00:06:47.206 "tpoint_mask": "0x0" 00:06:47.206 }, 00:06:47.206 "blobfs": { 00:06:47.206 "mask": "0x80", 00:06:47.206 "tpoint_mask": "0x0" 00:06:47.206 }, 00:06:47.206 "dsa": { 00:06:47.206 "mask": "0x200", 00:06:47.206 "tpoint_mask": "0x0" 00:06:47.206 }, 00:06:47.206 "thread": { 00:06:47.206 "mask": "0x400", 00:06:47.206 "tpoint_mask": "0x0" 00:06:47.206 }, 00:06:47.206 "nvme_pcie": { 00:06:47.206 "mask": "0x800", 00:06:47.206 "tpoint_mask": "0x0" 00:06:47.206 }, 00:06:47.206 "iaa": { 00:06:47.206 "mask": "0x1000", 00:06:47.206 "tpoint_mask": "0x0" 00:06:47.207 }, 00:06:47.207 "nvme_tcp": { 00:06:47.207 "mask": "0x2000", 00:06:47.207 "tpoint_mask": "0x0" 00:06:47.207 }, 00:06:47.207 "bdev_nvme": { 00:06:47.207 "mask": "0x4000", 00:06:47.207 "tpoint_mask": "0x0" 00:06:47.207 }, 00:06:47.207 "sock": { 00:06:47.207 "mask": "0x8000", 00:06:47.207 "tpoint_mask": "0x0" 00:06:47.207 }, 00:06:47.207 "blob": { 00:06:47.207 "mask": "0x10000", 00:06:47.207 "tpoint_mask": "0x0" 00:06:47.207 }, 00:06:47.207 "bdev_raid": { 00:06:47.207 "mask": "0x20000", 00:06:47.207 "tpoint_mask": "0x0" 00:06:47.207 } 00:06:47.207 }' 00:06:47.207 11:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:47.207 11:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:47.207 11:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:47.469 11:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:47.469 11:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:47.469 11:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:47.469 11:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:47.469 11:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:47.469 11:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:47.469 ************************************ 00:06:47.469 END TEST rpc_trace_cmd_test 00:06:47.469 ************************************ 00:06:47.469 11:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:47.469 00:06:47.469 real 0m0.183s 00:06:47.469 user 0m0.138s 00:06:47.469 sys 0m0.032s 00:06:47.469 11:02:16 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.469 11:02:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.469 11:02:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:47.469 11:02:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:47.469 11:02:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:47.469 11:02:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.469 11:02:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.469 11:02:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.469 ************************************ 00:06:47.469 START TEST rpc_daemon_integrity 00:06:47.469 ************************************ 00:06:47.469 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:47.469 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:47.469 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.469 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.469 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.469 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:47.469 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:47.469 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:47.469 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:47.469 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.469 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.469 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.470 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:47.470 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:47.470 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.470 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.470 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.470 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:47.470 { 00:06:47.470 "name": "Malloc2", 00:06:47.470 "aliases": [ 00:06:47.470 "f5f398e9-35c1-4b0e-a08e-ab976d8c7df0" 00:06:47.470 ], 00:06:47.470 "product_name": "Malloc disk", 00:06:47.470 "block_size": 512, 00:06:47.470 "num_blocks": 16384, 00:06:47.470 "uuid": "f5f398e9-35c1-4b0e-a08e-ab976d8c7df0", 00:06:47.470 "assigned_rate_limits": { 00:06:47.470 "rw_ios_per_sec": 0, 00:06:47.470 "rw_mbytes_per_sec": 0, 00:06:47.470 "r_mbytes_per_sec": 0, 00:06:47.470 "w_mbytes_per_sec": 0 00:06:47.470 }, 00:06:47.470 "claimed": false, 00:06:47.470 "zoned": false, 00:06:47.470 "supported_io_types": { 00:06:47.470 "read": true, 00:06:47.470 "write": true, 00:06:47.470 "unmap": true, 00:06:47.470 "flush": true, 00:06:47.470 "reset": true, 00:06:47.470 "nvme_admin": false, 00:06:47.470 "nvme_io": false, 00:06:47.470 "nvme_io_md": false, 00:06:47.470 "write_zeroes": true, 00:06:47.470 "zcopy": true, 00:06:47.470 "get_zone_info": false, 00:06:47.470 "zone_management": false, 00:06:47.470 "zone_append": false, 00:06:47.470 "compare": false, 00:06:47.470 "compare_and_write": false, 00:06:47.470 "abort": true, 00:06:47.470 
"seek_hole": false, 00:06:47.470 "seek_data": false, 00:06:47.470 "copy": true, 00:06:47.470 "nvme_iov_md": false 00:06:47.470 }, 00:06:47.470 "memory_domains": [ 00:06:47.470 { 00:06:47.470 "dma_device_id": "system", 00:06:47.470 "dma_device_type": 1 00:06:47.470 }, 00:06:47.470 { 00:06:47.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.470 "dma_device_type": 2 00:06:47.470 } 00:06:47.470 ], 00:06:47.470 "driver_specific": {} 00:06:47.470 } 00:06:47.470 ]' 00:06:47.732 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:47.732 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:47.732 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:47.732 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.732 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.732 [2024-11-27 11:02:16.391344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:47.732 [2024-11-27 11:02:16.391444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.732 [2024-11-27 11:02:16.391476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:47.732 [2024-11-27 11:02:16.391487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.732 [2024-11-27 11:02:16.394319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.732 [2024-11-27 11:02:16.394556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:47.732 Passthru0 00:06:47.732 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.732 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:47.732 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.732 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.732 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.732 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:47.732 { 00:06:47.732 "name": "Malloc2", 00:06:47.732 "aliases": [ 00:06:47.732 "f5f398e9-35c1-4b0e-a08e-ab976d8c7df0" 00:06:47.732 ], 00:06:47.732 "product_name": "Malloc disk", 00:06:47.732 "block_size": 512, 00:06:47.732 "num_blocks": 16384, 00:06:47.732 "uuid": "f5f398e9-35c1-4b0e-a08e-ab976d8c7df0", 00:06:47.732 "assigned_rate_limits": { 00:06:47.732 "rw_ios_per_sec": 0, 00:06:47.732 "rw_mbytes_per_sec": 0, 00:06:47.732 "r_mbytes_per_sec": 0, 00:06:47.732 "w_mbytes_per_sec": 0 00:06:47.732 }, 00:06:47.732 "claimed": true, 00:06:47.732 "claim_type": "exclusive_write", 00:06:47.732 "zoned": false, 00:06:47.732 "supported_io_types": { 00:06:47.732 "read": true, 00:06:47.732 "write": true, 00:06:47.732 "unmap": true, 00:06:47.732 "flush": true, 00:06:47.732 "reset": true, 00:06:47.732 "nvme_admin": false, 00:06:47.732 "nvme_io": false, 00:06:47.732 "nvme_io_md": false, 00:06:47.732 "write_zeroes": true, 00:06:47.732 "zcopy": true, 00:06:47.732 "get_zone_info": false, 00:06:47.732 "zone_management": false, 00:06:47.732 "zone_append": false, 00:06:47.732 "compare": false, 00:06:47.732 "compare_and_write": false, 00:06:47.732 "abort": true, 00:06:47.732 "seek_hole": false, 00:06:47.732 "seek_data": false, 00:06:47.732 "copy": true, 00:06:47.732 "nvme_iov_md": false 
00:06:47.732 }, 00:06:47.732 "memory_domains": [ 00:06:47.732 { 00:06:47.732 "dma_device_id": "system", 00:06:47.732 "dma_device_type": 1 00:06:47.732 }, 00:06:47.732 { 00:06:47.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.732 "dma_device_type": 2 00:06:47.732 } 00:06:47.732 ], 00:06:47.732 "driver_specific": {} 00:06:47.732 }, 00:06:47.732 { 00:06:47.732 "name": "Passthru0", 00:06:47.732 "aliases": [ 00:06:47.732 "d0df8e0e-3948-5d5e-a002-7487ae1bb69c" 00:06:47.732 ], 00:06:47.732 "product_name": "passthru", 00:06:47.732 "block_size": 512, 00:06:47.732 "num_blocks": 16384, 00:06:47.732 "uuid": "d0df8e0e-3948-5d5e-a002-7487ae1bb69c", 00:06:47.732 "assigned_rate_limits": { 00:06:47.732 "rw_ios_per_sec": 0, 00:06:47.732 "rw_mbytes_per_sec": 0, 00:06:47.732 "r_mbytes_per_sec": 0, 00:06:47.732 "w_mbytes_per_sec": 0 00:06:47.732 }, 00:06:47.732 "claimed": false, 00:06:47.732 "zoned": false, 00:06:47.732 "supported_io_types": { 00:06:47.732 "read": true, 00:06:47.732 "write": true, 00:06:47.732 "unmap": true, 00:06:47.732 "flush": true, 00:06:47.732 "reset": true, 00:06:47.732 "nvme_admin": false, 00:06:47.732 "nvme_io": false, 00:06:47.732 "nvme_io_md": false, 00:06:47.732 "write_zeroes": true, 00:06:47.732 "zcopy": true, 00:06:47.732 "get_zone_info": false, 00:06:47.732 "zone_management": false, 00:06:47.732 "zone_append": false, 00:06:47.732 "compare": false, 00:06:47.732 "compare_and_write": false, 00:06:47.732 "abort": true, 00:06:47.732 "seek_hole": false, 00:06:47.732 "seek_data": false, 00:06:47.732 "copy": true, 00:06:47.733 "nvme_iov_md": false 00:06:47.733 }, 00:06:47.733 "memory_domains": [ 00:06:47.733 { 00:06:47.733 "dma_device_id": "system", 00:06:47.733 "dma_device_type": 1 00:06:47.733 }, 00:06:47.733 { 00:06:47.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.733 "dma_device_type": 2 00:06:47.733 } 00:06:47.733 ], 00:06:47.733 "driver_specific": { 00:06:47.733 "passthru": { 00:06:47.733 "name": "Passthru0", 00:06:47.733 "base_bdev_name": "Malloc2" 00:06:47.733 } 00:06:47.733 } 00:06:47.733 } 00:06:47.733 ]' 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # 
jq length 00:06:47.733 ************************************ 00:06:47.733 END TEST rpc_daemon_integrity 00:06:47.733 ************************************ 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:47.733 00:06:47.733 real 0m0.249s 00:06:47.733 user 0m0.140s 00:06:47.733 sys 0m0.034s 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.733 11:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.733 11:02:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:47.733 11:02:16 rpc -- rpc/rpc.sh@84 -- # killprocess 69828 00:06:47.733 11:02:16 rpc -- common/autotest_common.sh@950 -- # '[' -z 69828 ']' 00:06:47.733 11:02:16 rpc -- common/autotest_common.sh@954 -- # kill -0 69828 00:06:47.733 11:02:16 rpc -- common/autotest_common.sh@955 -- # uname 00:06:47.733 11:02:16 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.733 11:02:16 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69828 00:06:47.733 killing process with pid 69828 00:06:47.733 11:02:16 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.733 11:02:16 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.733 11:02:16 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69828' 00:06:47.733 11:02:16 rpc -- common/autotest_common.sh@969 -- # kill 69828 00:06:47.733 11:02:16 rpc -- common/autotest_common.sh@974 -- # wait 69828 00:06:48.299 ************************************ 00:06:48.299 END TEST rpc 00:06:48.299 ************************************ 00:06:48.299 00:06:48.299 real 0m2.618s 00:06:48.299 user 0m2.952s 00:06:48.299 sys 0m0.731s 00:06:48.299 11:02:17 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.299 11:02:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.299 11:02:17 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:48.299 11:02:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.299 11:02:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.299 11:02:17 -- common/autotest_common.sh@10 -- # set +x 00:06:48.299 ************************************ 00:06:48.299 START TEST skip_rpc 00:06:48.299 ************************************ 00:06:48.299 11:02:17 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:48.299 * Looking for test storage... 
00:06:48.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:48.299 11:02:17 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:48.299 11:02:17 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:48.299 11:02:17 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:48.558 11:02:17 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.558 11:02:17 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:48.558 11:02:17 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.558 11:02:17 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:48.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.558 --rc genhtml_branch_coverage=1 00:06:48.558 --rc genhtml_function_coverage=1 00:06:48.558 --rc genhtml_legend=1 00:06:48.558 --rc geninfo_all_blocks=1 00:06:48.558 --rc geninfo_unexecuted_blocks=1 00:06:48.558 00:06:48.558 ' 00:06:48.558 11:02:17 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:48.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.558 --rc genhtml_branch_coverage=1 00:06:48.558 --rc genhtml_function_coverage=1 00:06:48.558 --rc genhtml_legend=1 00:06:48.558 --rc geninfo_all_blocks=1 00:06:48.558 --rc geninfo_unexecuted_blocks=1 00:06:48.558 00:06:48.558 ' 00:06:48.558 11:02:17 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:06:48.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.558 --rc genhtml_branch_coverage=1 00:06:48.558 --rc genhtml_function_coverage=1 00:06:48.558 --rc genhtml_legend=1 00:06:48.558 --rc geninfo_all_blocks=1 00:06:48.558 --rc geninfo_unexecuted_blocks=1 00:06:48.558 00:06:48.558 ' 00:06:48.558 11:02:17 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:48.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.558 --rc genhtml_branch_coverage=1 00:06:48.558 --rc genhtml_function_coverage=1 00:06:48.558 --rc genhtml_legend=1 00:06:48.558 --rc geninfo_all_blocks=1 00:06:48.558 --rc geninfo_unexecuted_blocks=1 00:06:48.558 00:06:48.558 ' 00:06:48.558 11:02:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:48.558 11:02:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:48.558 11:02:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:48.558 11:02:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.558 11:02:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.558 11:02:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.558 ************************************ 00:06:48.558 START TEST skip_rpc 00:06:48.558 ************************************ 00:06:48.558 11:02:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:48.558 11:02:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70029 00:06:48.558 11:02:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.558 11:02:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:48.558 11:02:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:48.558 [2024-11-27 11:02:17.311330] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:48.558 [2024-11-27 11:02:17.311608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70029 ] 00:06:48.817 [2024-11-27 11:02:17.462190] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.817 [2024-11-27 11:02:17.505164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.082 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70029 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70029 ']' 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70029 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70029 00:06:54.083 killing process with pid 70029 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70029' 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70029 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70029 00:06:54.083 ************************************ 00:06:54.083 END TEST skip_rpc 00:06:54.083 ************************************ 00:06:54.083 00:06:54.083 real 0m5.282s 00:06:54.083 user 0m4.896s 00:06:54.083 sys 0m0.286s 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.083 11:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:06:54.083 11:02:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:54.083 11:02:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.083 11:02:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.083 11:02:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.083 ************************************ 00:06:54.083 START TEST skip_rpc_with_json 00:06:54.083 ************************************ 00:06:54.083 11:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:54.083 11:02:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:54.083 11:02:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70117 00:06:54.083 11:02:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.083 11:02:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70117 00:06:54.083 11:02:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.083 11:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70117 ']' 00:06:54.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.083 11:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.083 11:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.083 11:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.083 11:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.083 11:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.083 [2024-11-27 11:02:22.647803] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:54.083 [2024-11-27 11:02:22.647958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70117 ] 00:06:54.083 [2024-11-27 11:02:22.796363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.083 [2024-11-27 11:02:22.839078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.662 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.662 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:54.662 11:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:54.662 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.662 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.662 [2024-11-27 11:02:23.507138] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:54.662 request: 00:06:54.662 { 00:06:54.662 "trtype": "tcp", 00:06:54.662 "method": "nvmf_get_transports", 00:06:54.662 "req_id": 1 00:06:54.662 } 00:06:54.662 Got JSON-RPC error response 00:06:54.662 response: 00:06:54.662 { 00:06:54.662 "code": -19, 00:06:54.662 "message": "No such device" 00:06:54.662 } 00:06:54.662 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:54.663 11:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:54.663 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.663 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.663 [2024-11-27 11:02:23.515257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.663 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.663 11:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:54.663 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.663 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.932 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.932 11:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:54.932 { 00:06:54.932 "subsystems": [ 00:06:54.932 { 00:06:54.932 "subsystem": "fsdev", 00:06:54.932 "config": [ 00:06:54.932 { 00:06:54.932 "method": "fsdev_set_opts", 00:06:54.932 "params": { 00:06:54.932 "fsdev_io_pool_size": 65535, 00:06:54.932 "fsdev_io_cache_size": 256 00:06:54.932 } 00:06:54.932 } 00:06:54.932 ] 00:06:54.932 }, 00:06:54.932 { 00:06:54.932 "subsystem": "keyring", 00:06:54.932 "config": [] 00:06:54.932 }, 00:06:54.932 { 00:06:54.932 "subsystem": "iobuf", 00:06:54.932 "config": [ 00:06:54.932 { 00:06:54.932 "method": "iobuf_set_options", 00:06:54.932 "params": { 00:06:54.932 "small_pool_count": 8192, 00:06:54.932 "large_pool_count": 1024, 00:06:54.932 "small_bufsize": 8192, 00:06:54.932 "large_bufsize": 135168 00:06:54.932 } 00:06:54.932 } 00:06:54.932 ] 00:06:54.932 }, 00:06:54.932 { 00:06:54.932 "subsystem": "sock", 00:06:54.932 "config": [ 00:06:54.932 { 00:06:54.932 "method": 
"sock_set_default_impl", 00:06:54.932 "params": { 00:06:54.932 "impl_name": "posix" 00:06:54.932 } 00:06:54.932 }, 00:06:54.932 { 00:06:54.932 "method": "sock_impl_set_options", 00:06:54.932 "params": { 00:06:54.932 "impl_name": "ssl", 00:06:54.932 "recv_buf_size": 4096, 00:06:54.932 "send_buf_size": 4096, 00:06:54.932 "enable_recv_pipe": true, 00:06:54.932 "enable_quickack": false, 00:06:54.932 "enable_placement_id": 0, 00:06:54.933 "enable_zerocopy_send_server": true, 00:06:54.933 "enable_zerocopy_send_client": false, 00:06:54.933 "zerocopy_threshold": 0, 00:06:54.933 "tls_version": 0, 00:06:54.933 "enable_ktls": false 00:06:54.933 } 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "method": "sock_impl_set_options", 00:06:54.933 "params": { 00:06:54.933 "impl_name": "posix", 00:06:54.933 "recv_buf_size": 2097152, 00:06:54.933 "send_buf_size": 2097152, 00:06:54.933 "enable_recv_pipe": true, 00:06:54.933 "enable_quickack": false, 00:06:54.933 "enable_placement_id": 0, 00:06:54.933 "enable_zerocopy_send_server": true, 00:06:54.933 "enable_zerocopy_send_client": false, 00:06:54.933 "zerocopy_threshold": 0, 00:06:54.933 "tls_version": 0, 00:06:54.933 "enable_ktls": false 00:06:54.933 } 00:06:54.933 } 00:06:54.933 ] 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "subsystem": "vmd", 00:06:54.933 "config": [] 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "subsystem": "accel", 00:06:54.933 "config": [ 00:06:54.933 { 00:06:54.933 "method": "accel_set_options", 00:06:54.933 "params": { 00:06:54.933 "small_cache_size": 128, 00:06:54.933 "large_cache_size": 16, 00:06:54.933 "task_count": 2048, 00:06:54.933 "sequence_count": 2048, 00:06:54.933 "buf_count": 2048 00:06:54.933 } 00:06:54.933 } 00:06:54.933 ] 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "subsystem": "bdev", 00:06:54.933 "config": [ 00:06:54.933 { 00:06:54.933 "method": "bdev_set_options", 00:06:54.933 "params": { 00:06:54.933 "bdev_io_pool_size": 65535, 00:06:54.933 "bdev_io_cache_size": 256, 00:06:54.933 "bdev_auto_examine": true, 00:06:54.933 "iobuf_small_cache_size": 128, 00:06:54.933 "iobuf_large_cache_size": 16 00:06:54.933 } 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "method": "bdev_raid_set_options", 00:06:54.933 "params": { 00:06:54.933 "process_window_size_kb": 1024, 00:06:54.933 "process_max_bandwidth_mb_sec": 0 00:06:54.933 } 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "method": "bdev_iscsi_set_options", 00:06:54.933 "params": { 00:06:54.933 "timeout_sec": 30 00:06:54.933 } 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "method": "bdev_nvme_set_options", 00:06:54.933 "params": { 00:06:54.933 "action_on_timeout": "none", 00:06:54.933 "timeout_us": 0, 00:06:54.933 "timeout_admin_us": 0, 00:06:54.933 "keep_alive_timeout_ms": 10000, 00:06:54.933 "arbitration_burst": 0, 00:06:54.933 "low_priority_weight": 0, 00:06:54.933 "medium_priority_weight": 0, 00:06:54.933 "high_priority_weight": 0, 00:06:54.933 "nvme_adminq_poll_period_us": 10000, 00:06:54.933 "nvme_ioq_poll_period_us": 0, 00:06:54.933 "io_queue_requests": 0, 00:06:54.933 "delay_cmd_submit": true, 00:06:54.933 "transport_retry_count": 4, 00:06:54.933 "bdev_retry_count": 3, 00:06:54.933 "transport_ack_timeout": 0, 00:06:54.933 "ctrlr_loss_timeout_sec": 0, 00:06:54.933 "reconnect_delay_sec": 0, 00:06:54.933 "fast_io_fail_timeout_sec": 0, 00:06:54.933 "disable_auto_failback": false, 00:06:54.933 "generate_uuids": false, 00:06:54.933 "transport_tos": 0, 00:06:54.933 "nvme_error_stat": false, 00:06:54.933 "rdma_srq_size": 0, 00:06:54.933 "io_path_stat": false, 00:06:54.933 
"allow_accel_sequence": false, 00:06:54.933 "rdma_max_cq_size": 0, 00:06:54.933 "rdma_cm_event_timeout_ms": 0, 00:06:54.933 "dhchap_digests": [ 00:06:54.933 "sha256", 00:06:54.933 "sha384", 00:06:54.933 "sha512" 00:06:54.933 ], 00:06:54.933 "dhchap_dhgroups": [ 00:06:54.933 "null", 00:06:54.933 "ffdhe2048", 00:06:54.933 "ffdhe3072", 00:06:54.933 "ffdhe4096", 00:06:54.933 "ffdhe6144", 00:06:54.933 "ffdhe8192" 00:06:54.933 ] 00:06:54.933 } 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "method": "bdev_nvme_set_hotplug", 00:06:54.933 "params": { 00:06:54.933 "period_us": 100000, 00:06:54.933 "enable": false 00:06:54.933 } 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "method": "bdev_wait_for_examine" 00:06:54.933 } 00:06:54.933 ] 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "subsystem": "scsi", 00:06:54.933 "config": null 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "subsystem": "scheduler", 00:06:54.933 "config": [ 00:06:54.933 { 00:06:54.933 "method": "framework_set_scheduler", 00:06:54.933 "params": { 00:06:54.933 "name": "static" 00:06:54.933 } 00:06:54.933 } 00:06:54.933 ] 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "subsystem": "vhost_scsi", 00:06:54.933 "config": [] 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "subsystem": "vhost_blk", 00:06:54.933 "config": [] 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "subsystem": "ublk", 00:06:54.933 "config": [] 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "subsystem": "nbd", 00:06:54.933 "config": [] 00:06:54.933 }, 00:06:54.933 { 00:06:54.933 "subsystem": "nvmf", 00:06:54.933 "config": [ 00:06:54.933 { 00:06:54.933 "method": "nvmf_set_config", 00:06:54.933 "params": { 00:06:54.934 "discovery_filter": "match_any", 00:06:54.934 "admin_cmd_passthru": { 00:06:54.934 "identify_ctrlr": false 00:06:54.934 }, 00:06:54.934 "dhchap_digests": [ 00:06:54.934 "sha256", 00:06:54.934 "sha384", 00:06:54.934 "sha512" 00:06:54.934 ], 00:06:54.934 "dhchap_dhgroups": [ 00:06:54.934 "null", 00:06:54.934 "ffdhe2048", 00:06:54.934 "ffdhe3072", 00:06:54.934 "ffdhe4096", 00:06:54.934 "ffdhe6144", 00:06:54.934 "ffdhe8192" 00:06:54.934 ] 00:06:54.934 } 00:06:54.934 }, 00:06:54.934 { 00:06:54.934 "method": "nvmf_set_max_subsystems", 00:06:54.934 "params": { 00:06:54.934 "max_subsystems": 1024 00:06:54.934 } 00:06:54.934 }, 00:06:54.934 { 00:06:54.934 "method": "nvmf_set_crdt", 00:06:54.934 "params": { 00:06:54.934 "crdt1": 0, 00:06:54.934 "crdt2": 0, 00:06:54.934 "crdt3": 0 00:06:54.934 } 00:06:54.934 }, 00:06:54.934 { 00:06:54.934 "method": "nvmf_create_transport", 00:06:54.934 "params": { 00:06:54.934 "trtype": "TCP", 00:06:54.934 "max_queue_depth": 128, 00:06:54.934 "max_io_qpairs_per_ctrlr": 127, 00:06:54.934 "in_capsule_data_size": 4096, 00:06:54.934 "max_io_size": 131072, 00:06:54.934 "io_unit_size": 131072, 00:06:54.934 "max_aq_depth": 128, 00:06:54.934 "num_shared_buffers": 511, 00:06:54.934 "buf_cache_size": 4294967295, 00:06:54.934 "dif_insert_or_strip": false, 00:06:54.934 "zcopy": false, 00:06:54.934 "c2h_success": true, 00:06:54.934 "sock_priority": 0, 00:06:54.934 "abort_timeout_sec": 1, 00:06:54.934 "ack_timeout": 0, 00:06:54.934 "data_wr_pool_size": 0 00:06:54.934 } 00:06:54.934 } 00:06:54.934 ] 00:06:54.934 }, 00:06:54.934 { 00:06:54.934 "subsystem": "iscsi", 00:06:54.934 "config": [ 00:06:54.934 { 00:06:54.934 "method": "iscsi_set_options", 00:06:54.934 "params": { 00:06:54.934 "node_base": "iqn.2016-06.io.spdk", 00:06:54.934 "max_sessions": 128, 00:06:54.934 "max_connections_per_session": 2, 00:06:54.934 "max_queue_depth": 64, 00:06:54.934 "default_time2wait": 2, 
00:06:54.934 "default_time2retain": 20, 00:06:54.934 "first_burst_length": 8192, 00:06:54.934 "immediate_data": true, 00:06:54.934 "allow_duplicated_isid": false, 00:06:54.934 "error_recovery_level": 0, 00:06:54.934 "nop_timeout": 60, 00:06:54.934 "nop_in_interval": 30, 00:06:54.934 "disable_chap": false, 00:06:54.934 "require_chap": false, 00:06:54.934 "mutual_chap": false, 00:06:54.934 "chap_group": 0, 00:06:54.934 "max_large_datain_per_connection": 64, 00:06:54.934 "max_r2t_per_connection": 4, 00:06:54.934 "pdu_pool_size": 36864, 00:06:54.934 "immediate_data_pool_size": 16384, 00:06:54.934 "data_out_pool_size": 2048 00:06:54.934 } 00:06:54.934 } 00:06:54.934 ] 00:06:54.934 } 00:06:54.934 ] 00:06:54.934 } 00:06:54.934 11:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:54.934 11:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70117 00:06:54.934 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70117 ']' 00:06:54.934 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70117 00:06:54.934 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:54.934 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.934 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70117 00:06:54.934 killing process with pid 70117 00:06:54.934 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.934 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.934 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70117' 00:06:54.934 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70117 00:06:54.934 11:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70117 00:06:55.508 11:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70145 00:06:55.508 11:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:55.508 11:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:00.797 11:02:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70145 00:07:00.797 11:02:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70145 ']' 00:07:00.797 11:02:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70145 00:07:00.797 11:02:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:00.797 11:02:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.797 11:02:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70145 00:07:00.797 killing process with pid 70145 00:07:00.797 11:02:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.797 11:02:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.797 11:02:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70145' 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70145 
00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70145 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:00.798 ************************************ 00:07:00.798 END TEST skip_rpc_with_json 00:07:00.798 ************************************ 00:07:00.798 00:07:00.798 real 0m6.878s 00:07:00.798 user 0m6.348s 00:07:00.798 sys 0m0.763s 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:00.798 11:02:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:00.798 11:02:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.798 11:02:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.798 11:02:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.798 ************************************ 00:07:00.798 START TEST skip_rpc_with_delay 00:07:00.798 ************************************ 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:00.798 [2024-11-27 11:02:29.579945] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
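That *ERROR* line is the expected outcome rather than a failure of the run: skip_rpc_with_delay asserts that spdk_tgt rejects --wait-for-rpc when the RPC server is disabled and exits non-zero. A minimal sketch of the same negative check (the leading '!' plays the role of the NOT helper used by the test, and the spdk_tgt path assumes the repo layout shown in the log):

  ! ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc \
      && echo "expected: target refused --wait-for-rpc with no RPC server"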
00:07:00.798 [2024-11-27 11:02:29.580080] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.798 00:07:00.798 real 0m0.120s 00:07:00.798 user 0m0.066s 00:07:00.798 sys 0m0.053s 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.798 ************************************ 00:07:00.798 END TEST skip_rpc_with_delay 00:07:00.798 ************************************ 00:07:00.798 11:02:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:00.798 11:02:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:00.798 11:02:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:00.798 11:02:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:00.798 11:02:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.798 11:02:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.798 11:02:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.058 ************************************ 00:07:01.058 START TEST exit_on_failed_rpc_init 00:07:01.058 ************************************ 00:07:01.058 11:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:01.058 11:02:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70257 00:07:01.058 11:02:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70257 00:07:01.058 11:02:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.058 11:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70257 ']' 00:07:01.058 11:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.058 11:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.058 11:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.058 11:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.058 11:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:01.058 [2024-11-27 11:02:29.774929] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:01.058 [2024-11-27 11:02:29.775334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70257 ] 00:07:01.058 [2024-11-27 11:02:29.925259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.317 [2024-11-27 11:02:29.968529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.883 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.883 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:01.883 11:02:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:01.883 11:02:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:01.883 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:01.883 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:01.883 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.883 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.883 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.883 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.883 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.883 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.884 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.884 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:01.884 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:01.884 [2024-11-27 11:02:30.680086] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:01.884 [2024-11-27 11:02:30.680204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70269 ] 00:07:02.142 [2024-11-27 11:02:30.825528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.142 [2024-11-27 11:02:30.856802] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.142 [2024-11-27 11:02:30.856887] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
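This "socket in use" error is likewise the point of the test: the first target (pid 70257) is still listening on /var/tmp/spdk.sock, so the second spdk_tgt launched on core mask 0x2 must fail RPC initialization and stop with a non-zero exit code. A minimal sketch of the same collision, assuming both targets use the default RPC socket path:

  ./build/bin/spdk_tgt -m 0x1 &       # first target claims /var/tmp/spdk.sock
  sleep 5                             # crude stand-in for the waitforlisten helper
  ! ./build/bin/spdk_tgt -m 0x2 \
      && echo "expected: second target exits non-zero, RPC socket already in use"
  kill %1                             # tear down the surviving first target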
00:07:02.142 [2024-11-27 11:02:30.856918] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:02.142 [2024-11-27 11:02:30.856932] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70257 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70257 ']' 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70257 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70257 00:07:02.142 killing process with pid 70257 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70257' 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70257 00:07:02.142 11:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70257 00:07:02.402 ************************************ 00:07:02.402 END TEST exit_on_failed_rpc_init 00:07:02.402 ************************************ 00:07:02.402 00:07:02.402 real 0m1.518s 00:07:02.402 user 0m1.642s 00:07:02.402 sys 0m0.422s 00:07:02.402 11:02:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.402 11:02:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:02.402 11:02:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:02.402 ************************************ 00:07:02.402 END TEST skip_rpc 00:07:02.402 ************************************ 00:07:02.402 00:07:02.402 real 0m14.173s 00:07:02.402 user 0m13.088s 00:07:02.402 sys 0m1.701s 00:07:02.402 11:02:31 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.402 11:02:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.663 11:02:31 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:02.663 11:02:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.663 11:02:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.663 11:02:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.663 
************************************ 00:07:02.663 START TEST rpc_client 00:07:02.663 ************************************ 00:07:02.663 11:02:31 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:02.663 * Looking for test storage... 00:07:02.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:02.663 11:02:31 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:02.663 11:02:31 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:07:02.663 11:02:31 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:02.663 11:02:31 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:02.663 11:02:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.664 11:02:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.664 11:02:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.664 11:02:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:02.664 11:02:31 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.664 11:02:31 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:02.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.664 --rc genhtml_branch_coverage=1 00:07:02.664 --rc genhtml_function_coverage=1 00:07:02.664 --rc genhtml_legend=1 00:07:02.664 --rc geninfo_all_blocks=1 00:07:02.664 --rc geninfo_unexecuted_blocks=1 00:07:02.664 00:07:02.664 ' 00:07:02.664 11:02:31 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:02.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.664 --rc genhtml_branch_coverage=1 00:07:02.664 --rc genhtml_function_coverage=1 00:07:02.664 --rc genhtml_legend=1 00:07:02.664 --rc geninfo_all_blocks=1 00:07:02.664 --rc geninfo_unexecuted_blocks=1 00:07:02.664 00:07:02.664 ' 00:07:02.664 11:02:31 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:02.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.664 --rc genhtml_branch_coverage=1 00:07:02.664 --rc genhtml_function_coverage=1 00:07:02.664 --rc genhtml_legend=1 00:07:02.664 --rc geninfo_all_blocks=1 00:07:02.664 --rc geninfo_unexecuted_blocks=1 00:07:02.664 00:07:02.664 ' 00:07:02.664 11:02:31 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:02.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.664 --rc genhtml_branch_coverage=1 00:07:02.664 --rc genhtml_function_coverage=1 00:07:02.664 --rc genhtml_legend=1 00:07:02.664 --rc geninfo_all_blocks=1 00:07:02.664 --rc geninfo_unexecuted_blocks=1 00:07:02.664 00:07:02.664 ' 00:07:02.664 11:02:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:02.664 OK 00:07:02.664 11:02:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:02.664 00:07:02.664 real 0m0.206s 00:07:02.664 user 0m0.117s 00:07:02.664 sys 0m0.091s 00:07:02.664 11:02:31 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.664 11:02:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:02.664 ************************************ 00:07:02.664 END TEST rpc_client 00:07:02.664 ************************************ 00:07:02.926 11:02:31 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:02.926 11:02:31 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.926 11:02:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.926 11:02:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.926 ************************************ 00:07:02.926 START TEST json_config 00:07:02.926 ************************************ 00:07:02.926 11:02:31 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:02.926 11:02:31 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:02.926 11:02:31 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:07:02.926 11:02:31 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:02.926 11:02:31 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:02.926 11:02:31 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.926 11:02:31 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.926 11:02:31 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.926 11:02:31 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.926 11:02:31 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.926 11:02:31 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.926 11:02:31 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.926 11:02:31 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.926 11:02:31 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.926 11:02:31 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.926 11:02:31 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.926 11:02:31 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:02.926 11:02:31 json_config -- scripts/common.sh@345 -- # : 1 00:07:02.926 11:02:31 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.926 11:02:31 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.926 11:02:31 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:02.926 11:02:31 json_config -- scripts/common.sh@353 -- # local d=1 00:07:02.926 11:02:31 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.926 11:02:31 json_config -- scripts/common.sh@355 -- # echo 1 00:07:02.926 11:02:31 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.926 11:02:31 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:02.926 11:02:31 json_config -- scripts/common.sh@353 -- # local d=2 00:07:02.926 11:02:31 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.926 11:02:31 json_config -- scripts/common.sh@355 -- # echo 2 00:07:02.926 11:02:31 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.926 11:02:31 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.926 11:02:31 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.926 11:02:31 json_config -- scripts/common.sh@368 -- # return 0 00:07:02.926 11:02:31 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.926 11:02:31 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:02.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.926 --rc genhtml_branch_coverage=1 00:07:02.926 --rc genhtml_function_coverage=1 00:07:02.926 --rc genhtml_legend=1 00:07:02.926 --rc geninfo_all_blocks=1 00:07:02.926 --rc geninfo_unexecuted_blocks=1 00:07:02.926 00:07:02.926 ' 00:07:02.926 11:02:31 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:02.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.926 --rc genhtml_branch_coverage=1 00:07:02.926 --rc genhtml_function_coverage=1 00:07:02.926 --rc genhtml_legend=1 00:07:02.926 --rc geninfo_all_blocks=1 00:07:02.926 --rc geninfo_unexecuted_blocks=1 00:07:02.926 00:07:02.926 ' 00:07:02.926 11:02:31 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:02.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.926 --rc genhtml_branch_coverage=1 00:07:02.926 --rc genhtml_function_coverage=1 00:07:02.926 --rc genhtml_legend=1 00:07:02.926 --rc geninfo_all_blocks=1 00:07:02.926 --rc geninfo_unexecuted_blocks=1 00:07:02.926 00:07:02.926 ' 00:07:02.926 11:02:31 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:02.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.926 --rc genhtml_branch_coverage=1 00:07:02.926 --rc genhtml_function_coverage=1 00:07:02.926 --rc genhtml_legend=1 00:07:02.926 --rc geninfo_all_blocks=1 00:07:02.926 --rc geninfo_unexecuted_blocks=1 00:07:02.926 00:07:02.926 ' 00:07:02.926 11:02:31 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.926 11:02:31 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b2b8603c-1fc7-4e5b-8078-2e6f24a83076 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b2b8603c-1fc7-4e5b-8078-2e6f24a83076 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.926 11:02:31 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:02.926 11:02:31 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.926 11:02:31 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.927 11:02:31 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.927 11:02:31 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.927 11:02:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.927 11:02:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.927 11:02:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.927 11:02:31 json_config -- paths/export.sh@5 -- # export PATH 00:07:02.927 11:02:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.927 11:02:31 json_config -- nvmf/common.sh@51 -- # : 0 00:07:02.927 11:02:31 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:02.927 11:02:31 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:02.927 11:02:31 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.927 11:02:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.927 11:02:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.927 11:02:31 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:02.927 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:02.927 11:02:31 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:02.927 11:02:31 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:02.927 11:02:31 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:02.927 11:02:31 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:02.927 11:02:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:02.927 11:02:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:02.927 11:02:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:02.927 WARNING: No tests are enabled so not running JSON configuration tests 00:07:02.927 11:02:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:02.927 11:02:31 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:02.927 11:02:31 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:02.927 00:07:02.927 real 0m0.156s 00:07:02.927 user 0m0.097s 00:07:02.927 sys 0m0.058s 00:07:02.927 11:02:31 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.927 11:02:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.927 ************************************ 00:07:02.927 END TEST json_config 00:07:02.927 ************************************ 00:07:02.927 11:02:31 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:02.927 11:02:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.927 11:02:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.927 11:02:31 -- common/autotest_common.sh@10 -- # set +x 00:07:03.189 ************************************ 00:07:03.189 START TEST json_config_extra_key 00:07:03.189 ************************************ 00:07:03.189 11:02:31 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:03.189 11:02:31 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.189 11:02:31 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.189 11:02:31 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:03.189 11:02:31 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.189 11:02:31 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:03.189 11:02:31 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.189 11:02:31 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:03.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.189 --rc genhtml_branch_coverage=1 00:07:03.189 --rc genhtml_function_coverage=1 00:07:03.189 --rc genhtml_legend=1 00:07:03.189 --rc geninfo_all_blocks=1 00:07:03.189 --rc geninfo_unexecuted_blocks=1 00:07:03.189 00:07:03.189 ' 00:07:03.189 11:02:31 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:03.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.189 --rc genhtml_branch_coverage=1 00:07:03.189 --rc genhtml_function_coverage=1 00:07:03.189 --rc genhtml_legend=1 00:07:03.189 --rc geninfo_all_blocks=1 00:07:03.189 --rc geninfo_unexecuted_blocks=1 00:07:03.189 00:07:03.189 ' 00:07:03.189 11:02:31 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:03.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.189 --rc genhtml_branch_coverage=1 00:07:03.189 --rc genhtml_function_coverage=1 00:07:03.189 --rc genhtml_legend=1 00:07:03.189 --rc geninfo_all_blocks=1 00:07:03.189 --rc geninfo_unexecuted_blocks=1 00:07:03.189 00:07:03.189 ' 00:07:03.189 11:02:31 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:03.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.189 --rc genhtml_branch_coverage=1 00:07:03.189 --rc 
genhtml_function_coverage=1 00:07:03.189 --rc genhtml_legend=1 00:07:03.189 --rc geninfo_all_blocks=1 00:07:03.189 --rc geninfo_unexecuted_blocks=1 00:07:03.189 00:07:03.189 ' 00:07:03.189 11:02:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b2b8603c-1fc7-4e5b-8078-2e6f24a83076 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b2b8603c-1fc7-4e5b-8078-2e6f24a83076 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.189 11:02:31 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.189 11:02:31 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.189 11:02:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.189 11:02:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.189 11:02:31 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.189 11:02:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:03.189 11:02:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.190 11:02:31 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:03.190 11:02:31 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:03.190 11:02:31 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:03.190 11:02:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.190 11:02:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.190 11:02:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.190 11:02:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:03.190 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:03.190 11:02:31 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:03.190 11:02:31 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:03.190 11:02:31 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:03.190 11:02:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:03.190 11:02:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:03.190 11:02:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:03.190 11:02:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:03.190 11:02:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:03.190 11:02:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:03.190 11:02:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:03.190 11:02:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:03.190 11:02:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:03.190 11:02:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:03.190 INFO: launching applications... 00:07:03.190 11:02:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
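The start-up step traced below pairs spdk_tgt with a poll on its RPC socket before the test proceeds. A condensed sketch, assuming the binary and config paths shown in the trace; the loop is only an approximation of the waitforlisten helper, with the 100-retry limit taken from the log:

# Launch the target with the extra_key JSON config and wait for its RPC socket.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
SOCK=/var/tmp/spdk_tgt.sock

"$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
app_pid=$!

for _ in $(seq 1 100); do
    # a trivial RPC succeeds once the target is listening on the socket
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done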
00:07:03.190 11:02:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:03.190 11:02:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:03.190 11:02:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:03.190 11:02:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:03.190 11:02:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:03.190 11:02:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:03.190 11:02:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:03.190 11:02:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:03.190 11:02:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70452 00:07:03.190 11:02:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:03.190 Waiting for target to run... 00:07:03.190 11:02:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70452 /var/tmp/spdk_tgt.sock 00:07:03.190 11:02:31 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:03.190 11:02:31 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70452 ']' 00:07:03.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:03.190 11:02:31 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:03.190 11:02:31 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.190 11:02:31 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:03.190 11:02:31 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.190 11:02:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:03.190 [2024-11-27 11:02:32.042837] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:03.190 [2024-11-27 11:02:32.043235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70452 ] 00:07:03.760 [2024-11-27 11:02:32.439643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.760 [2024-11-27 11:02:32.464861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.028 11:02:32 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.028 11:02:32 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:04.028 11:02:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:04.028 00:07:04.028 INFO: shutting down applications... 00:07:04.028 11:02:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
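The shutdown step traced below sends SIGINT and then polls the PID, up to 30 iterations half a second apart, before declaring the target down. Roughly, with app_pid as an illustrative variable name:

kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        break                   # target exited cleanly
    fi
    sleep 0.5
done
echo 'SPDK target shutdown done'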
00:07:04.028 11:02:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:04.028 11:02:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:04.028 11:02:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:04.028 11:02:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70452 ]] 00:07:04.028 11:02:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70452 00:07:04.028 11:02:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:04.028 11:02:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:04.028 11:02:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70452 00:07:04.028 11:02:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:04.601 11:02:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:04.601 11:02:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:04.601 11:02:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70452 00:07:04.601 11:02:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:04.601 11:02:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:04.601 11:02:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:04.601 11:02:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:04.601 SPDK target shutdown done 00:07:04.601 11:02:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:04.601 Success 00:07:04.601 ************************************ 00:07:04.601 END TEST json_config_extra_key 00:07:04.601 ************************************ 00:07:04.601 00:07:04.601 real 0m1.587s 00:07:04.601 user 0m1.322s 00:07:04.601 sys 0m0.483s 00:07:04.601 11:02:33 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.601 11:02:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:04.601 11:02:33 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:04.601 11:02:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.601 11:02:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.601 11:02:33 -- common/autotest_common.sh@10 -- # set +x 00:07:04.601 ************************************ 00:07:04.601 START TEST alias_rpc 00:07:04.601 ************************************ 00:07:04.601 11:02:33 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:04.860 * Looking for test storage... 
00:07:04.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:04.860 11:02:33 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:04.860 11:02:33 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:04.860 11:02:33 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:04.860 11:02:33 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.860 11:02:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.861 11:02:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:04.861 11:02:33 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.861 11:02:33 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:04.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.861 --rc genhtml_branch_coverage=1 00:07:04.861 --rc genhtml_function_coverage=1 00:07:04.861 --rc genhtml_legend=1 00:07:04.861 --rc geninfo_all_blocks=1 00:07:04.861 --rc geninfo_unexecuted_blocks=1 00:07:04.861 00:07:04.861 ' 00:07:04.861 11:02:33 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:04.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.861 --rc genhtml_branch_coverage=1 00:07:04.861 --rc genhtml_function_coverage=1 00:07:04.861 --rc genhtml_legend=1 00:07:04.861 --rc geninfo_all_blocks=1 00:07:04.861 --rc geninfo_unexecuted_blocks=1 00:07:04.861 00:07:04.861 ' 00:07:04.861 11:02:33 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:04.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.861 --rc genhtml_branch_coverage=1 00:07:04.861 --rc genhtml_function_coverage=1 00:07:04.861 --rc genhtml_legend=1 00:07:04.861 --rc geninfo_all_blocks=1 00:07:04.861 --rc geninfo_unexecuted_blocks=1 00:07:04.861 00:07:04.861 ' 00:07:04.861 11:02:33 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:04.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.861 --rc genhtml_branch_coverage=1 00:07:04.861 --rc genhtml_function_coverage=1 00:07:04.861 --rc genhtml_legend=1 00:07:04.861 --rc geninfo_all_blocks=1 00:07:04.861 --rc geninfo_unexecuted_blocks=1 00:07:04.861 00:07:04.861 ' 00:07:04.861 11:02:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:04.861 11:02:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70525 00:07:04.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.861 11:02:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70525 00:07:04.861 11:02:33 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70525 ']' 00:07:04.861 11:02:33 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.861 11:02:33 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.861 11:02:33 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.861 11:02:33 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.861 11:02:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.861 11:02:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:04.861 [2024-11-27 11:02:33.665976] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:04.861 [2024-11-27 11:02:33.666092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70525 ] 00:07:05.119 [2024-11-27 11:02:33.813649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.119 [2024-11-27 11:02:33.844430] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.685 11:02:34 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.685 11:02:34 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.685 11:02:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:05.943 11:02:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70525 00:07:05.943 11:02:34 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70525 ']' 00:07:05.943 11:02:34 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70525 00:07:05.943 11:02:34 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:05.943 11:02:34 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.943 11:02:34 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70525 00:07:05.943 killing process with pid 70525 00:07:05.943 11:02:34 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.943 11:02:34 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.943 11:02:34 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70525' 00:07:05.943 11:02:34 alias_rpc -- common/autotest_common.sh@969 -- # kill 70525 00:07:05.943 11:02:34 alias_rpc -- common/autotest_common.sh@974 -- # wait 70525 00:07:06.204 ************************************ 00:07:06.204 END TEST alias_rpc 00:07:06.204 ************************************ 00:07:06.204 00:07:06.204 real 0m1.556s 00:07:06.204 user 0m1.647s 00:07:06.204 sys 0m0.394s 00:07:06.204 11:02:34 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.204 11:02:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.204 11:02:35 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:06.204 11:02:35 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:06.204 11:02:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.204 11:02:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.204 11:02:35 -- common/autotest_common.sh@10 -- # set +x 00:07:06.204 ************************************ 00:07:06.204 START TEST spdkcli_tcp 00:07:06.204 ************************************ 00:07:06.204 11:02:35 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:06.466 * Looking for test storage... 
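The spdkcli_tcp run that follows checks RPC access over TCP by bridging port 9998 to the target's Unix-domain socket with socat and issuing rpc_get_methods through rpc.py. Condensed from the commands visible in the trace below (paths, port, and retry options as shown there; tgt_pid and socat_pid are illustrative names):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &
tgt_pid=$!

# expose the Unix-domain RPC socket on TCP 127.0.0.1:9998
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# query the method list over the TCP bridge
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" "$tgt_pid"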
00:07:06.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:06.466 11:02:35 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:06.466 11:02:35 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:06.466 11:02:35 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:06.466 11:02:35 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.466 11:02:35 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:06.466 11:02:35 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.466 11:02:35 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:06.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.466 --rc genhtml_branch_coverage=1 00:07:06.466 --rc genhtml_function_coverage=1 00:07:06.466 --rc genhtml_legend=1 00:07:06.466 --rc geninfo_all_blocks=1 00:07:06.466 --rc geninfo_unexecuted_blocks=1 00:07:06.466 00:07:06.466 ' 00:07:06.466 11:02:35 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:06.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.466 --rc genhtml_branch_coverage=1 00:07:06.466 --rc genhtml_function_coverage=1 00:07:06.466 --rc genhtml_legend=1 00:07:06.466 --rc geninfo_all_blocks=1 00:07:06.466 --rc geninfo_unexecuted_blocks=1 00:07:06.466 
00:07:06.466 ' 00:07:06.466 11:02:35 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:06.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.466 --rc genhtml_branch_coverage=1 00:07:06.466 --rc genhtml_function_coverage=1 00:07:06.466 --rc genhtml_legend=1 00:07:06.466 --rc geninfo_all_blocks=1 00:07:06.466 --rc geninfo_unexecuted_blocks=1 00:07:06.466 00:07:06.466 ' 00:07:06.466 11:02:35 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:06.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.466 --rc genhtml_branch_coverage=1 00:07:06.467 --rc genhtml_function_coverage=1 00:07:06.467 --rc genhtml_legend=1 00:07:06.467 --rc geninfo_all_blocks=1 00:07:06.467 --rc geninfo_unexecuted_blocks=1 00:07:06.467 00:07:06.467 ' 00:07:06.467 11:02:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:06.467 11:02:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:06.467 11:02:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:06.467 11:02:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:06.467 11:02:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:06.467 11:02:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:06.467 11:02:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:06.467 11:02:35 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:06.467 11:02:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.467 11:02:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70605 00:07:06.467 11:02:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70605 00:07:06.467 11:02:35 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70605 ']' 00:07:06.467 11:02:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:06.467 11:02:35 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.467 11:02:35 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.467 11:02:35 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.467 11:02:35 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.467 11:02:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.467 [2024-11-27 11:02:35.270162] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:06.467 [2024-11-27 11:02:35.270290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70605 ] 00:07:06.726 [2024-11-27 11:02:35.419198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.726 [2024-11-27 11:02:35.454064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.726 [2024-11-27 11:02:35.454104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.292 11:02:36 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.292 11:02:36 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:07.292 11:02:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70622 00:07:07.292 11:02:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:07.292 11:02:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:07.552 [ 00:07:07.552 "bdev_malloc_delete", 00:07:07.552 "bdev_malloc_create", 00:07:07.552 "bdev_null_resize", 00:07:07.552 "bdev_null_delete", 00:07:07.552 "bdev_null_create", 00:07:07.552 "bdev_nvme_cuse_unregister", 00:07:07.552 "bdev_nvme_cuse_register", 00:07:07.552 "bdev_opal_new_user", 00:07:07.552 "bdev_opal_set_lock_state", 00:07:07.552 "bdev_opal_delete", 00:07:07.552 "bdev_opal_get_info", 00:07:07.552 "bdev_opal_create", 00:07:07.552 "bdev_nvme_opal_revert", 00:07:07.552 "bdev_nvme_opal_init", 00:07:07.552 "bdev_nvme_send_cmd", 00:07:07.552 "bdev_nvme_set_keys", 00:07:07.552 "bdev_nvme_get_path_iostat", 00:07:07.552 "bdev_nvme_get_mdns_discovery_info", 00:07:07.552 "bdev_nvme_stop_mdns_discovery", 00:07:07.552 "bdev_nvme_start_mdns_discovery", 00:07:07.552 "bdev_nvme_set_multipath_policy", 00:07:07.552 "bdev_nvme_set_preferred_path", 00:07:07.552 "bdev_nvme_get_io_paths", 00:07:07.552 "bdev_nvme_remove_error_injection", 00:07:07.552 "bdev_nvme_add_error_injection", 00:07:07.552 "bdev_nvme_get_discovery_info", 00:07:07.552 "bdev_nvme_stop_discovery", 00:07:07.552 "bdev_nvme_start_discovery", 00:07:07.552 "bdev_nvme_get_controller_health_info", 00:07:07.552 "bdev_nvme_disable_controller", 00:07:07.552 "bdev_nvme_enable_controller", 00:07:07.552 "bdev_nvme_reset_controller", 00:07:07.552 "bdev_nvme_get_transport_statistics", 00:07:07.552 "bdev_nvme_apply_firmware", 00:07:07.552 "bdev_nvme_detach_controller", 00:07:07.552 "bdev_nvme_get_controllers", 00:07:07.552 "bdev_nvme_attach_controller", 00:07:07.552 "bdev_nvme_set_hotplug", 00:07:07.552 "bdev_nvme_set_options", 00:07:07.552 "bdev_passthru_delete", 00:07:07.552 "bdev_passthru_create", 00:07:07.552 "bdev_lvol_set_parent_bdev", 00:07:07.552 "bdev_lvol_set_parent", 00:07:07.552 "bdev_lvol_check_shallow_copy", 00:07:07.552 "bdev_lvol_start_shallow_copy", 00:07:07.552 "bdev_lvol_grow_lvstore", 00:07:07.552 "bdev_lvol_get_lvols", 00:07:07.552 "bdev_lvol_get_lvstores", 00:07:07.552 "bdev_lvol_delete", 00:07:07.552 "bdev_lvol_set_read_only", 00:07:07.552 "bdev_lvol_resize", 00:07:07.552 "bdev_lvol_decouple_parent", 00:07:07.552 "bdev_lvol_inflate", 00:07:07.552 "bdev_lvol_rename", 00:07:07.552 "bdev_lvol_clone_bdev", 00:07:07.552 "bdev_lvol_clone", 00:07:07.552 "bdev_lvol_snapshot", 00:07:07.552 "bdev_lvol_create", 00:07:07.552 "bdev_lvol_delete_lvstore", 00:07:07.552 "bdev_lvol_rename_lvstore", 00:07:07.552 
"bdev_lvol_create_lvstore", 00:07:07.552 "bdev_raid_set_options", 00:07:07.553 "bdev_raid_remove_base_bdev", 00:07:07.553 "bdev_raid_add_base_bdev", 00:07:07.553 "bdev_raid_delete", 00:07:07.553 "bdev_raid_create", 00:07:07.553 "bdev_raid_get_bdevs", 00:07:07.553 "bdev_error_inject_error", 00:07:07.553 "bdev_error_delete", 00:07:07.553 "bdev_error_create", 00:07:07.553 "bdev_split_delete", 00:07:07.553 "bdev_split_create", 00:07:07.553 "bdev_delay_delete", 00:07:07.553 "bdev_delay_create", 00:07:07.553 "bdev_delay_update_latency", 00:07:07.553 "bdev_zone_block_delete", 00:07:07.553 "bdev_zone_block_create", 00:07:07.553 "blobfs_create", 00:07:07.553 "blobfs_detect", 00:07:07.553 "blobfs_set_cache_size", 00:07:07.553 "bdev_xnvme_delete", 00:07:07.553 "bdev_xnvme_create", 00:07:07.553 "bdev_aio_delete", 00:07:07.553 "bdev_aio_rescan", 00:07:07.553 "bdev_aio_create", 00:07:07.553 "bdev_ftl_set_property", 00:07:07.553 "bdev_ftl_get_properties", 00:07:07.553 "bdev_ftl_get_stats", 00:07:07.553 "bdev_ftl_unmap", 00:07:07.553 "bdev_ftl_unload", 00:07:07.553 "bdev_ftl_delete", 00:07:07.553 "bdev_ftl_load", 00:07:07.553 "bdev_ftl_create", 00:07:07.553 "bdev_virtio_attach_controller", 00:07:07.553 "bdev_virtio_scsi_get_devices", 00:07:07.553 "bdev_virtio_detach_controller", 00:07:07.553 "bdev_virtio_blk_set_hotplug", 00:07:07.553 "bdev_iscsi_delete", 00:07:07.553 "bdev_iscsi_create", 00:07:07.553 "bdev_iscsi_set_options", 00:07:07.553 "accel_error_inject_error", 00:07:07.553 "ioat_scan_accel_module", 00:07:07.553 "dsa_scan_accel_module", 00:07:07.553 "iaa_scan_accel_module", 00:07:07.553 "keyring_file_remove_key", 00:07:07.553 "keyring_file_add_key", 00:07:07.553 "keyring_linux_set_options", 00:07:07.553 "fsdev_aio_delete", 00:07:07.553 "fsdev_aio_create", 00:07:07.553 "iscsi_get_histogram", 00:07:07.553 "iscsi_enable_histogram", 00:07:07.553 "iscsi_set_options", 00:07:07.553 "iscsi_get_auth_groups", 00:07:07.553 "iscsi_auth_group_remove_secret", 00:07:07.553 "iscsi_auth_group_add_secret", 00:07:07.553 "iscsi_delete_auth_group", 00:07:07.553 "iscsi_create_auth_group", 00:07:07.553 "iscsi_set_discovery_auth", 00:07:07.553 "iscsi_get_options", 00:07:07.553 "iscsi_target_node_request_logout", 00:07:07.553 "iscsi_target_node_set_redirect", 00:07:07.553 "iscsi_target_node_set_auth", 00:07:07.553 "iscsi_target_node_add_lun", 00:07:07.553 "iscsi_get_stats", 00:07:07.553 "iscsi_get_connections", 00:07:07.553 "iscsi_portal_group_set_auth", 00:07:07.553 "iscsi_start_portal_group", 00:07:07.553 "iscsi_delete_portal_group", 00:07:07.553 "iscsi_create_portal_group", 00:07:07.553 "iscsi_get_portal_groups", 00:07:07.553 "iscsi_delete_target_node", 00:07:07.553 "iscsi_target_node_remove_pg_ig_maps", 00:07:07.553 "iscsi_target_node_add_pg_ig_maps", 00:07:07.553 "iscsi_create_target_node", 00:07:07.553 "iscsi_get_target_nodes", 00:07:07.553 "iscsi_delete_initiator_group", 00:07:07.553 "iscsi_initiator_group_remove_initiators", 00:07:07.553 "iscsi_initiator_group_add_initiators", 00:07:07.553 "iscsi_create_initiator_group", 00:07:07.553 "iscsi_get_initiator_groups", 00:07:07.553 "nvmf_set_crdt", 00:07:07.553 "nvmf_set_config", 00:07:07.553 "nvmf_set_max_subsystems", 00:07:07.553 "nvmf_stop_mdns_prr", 00:07:07.553 "nvmf_publish_mdns_prr", 00:07:07.553 "nvmf_subsystem_get_listeners", 00:07:07.553 "nvmf_subsystem_get_qpairs", 00:07:07.553 "nvmf_subsystem_get_controllers", 00:07:07.553 "nvmf_get_stats", 00:07:07.553 "nvmf_get_transports", 00:07:07.553 "nvmf_create_transport", 00:07:07.553 "nvmf_get_targets", 00:07:07.553 
"nvmf_delete_target", 00:07:07.553 "nvmf_create_target", 00:07:07.553 "nvmf_subsystem_allow_any_host", 00:07:07.553 "nvmf_subsystem_set_keys", 00:07:07.553 "nvmf_subsystem_remove_host", 00:07:07.553 "nvmf_subsystem_add_host", 00:07:07.553 "nvmf_ns_remove_host", 00:07:07.553 "nvmf_ns_add_host", 00:07:07.553 "nvmf_subsystem_remove_ns", 00:07:07.553 "nvmf_subsystem_set_ns_ana_group", 00:07:07.553 "nvmf_subsystem_add_ns", 00:07:07.553 "nvmf_subsystem_listener_set_ana_state", 00:07:07.553 "nvmf_discovery_get_referrals", 00:07:07.553 "nvmf_discovery_remove_referral", 00:07:07.553 "nvmf_discovery_add_referral", 00:07:07.553 "nvmf_subsystem_remove_listener", 00:07:07.553 "nvmf_subsystem_add_listener", 00:07:07.553 "nvmf_delete_subsystem", 00:07:07.553 "nvmf_create_subsystem", 00:07:07.553 "nvmf_get_subsystems", 00:07:07.553 "env_dpdk_get_mem_stats", 00:07:07.553 "nbd_get_disks", 00:07:07.553 "nbd_stop_disk", 00:07:07.553 "nbd_start_disk", 00:07:07.553 "ublk_recover_disk", 00:07:07.553 "ublk_get_disks", 00:07:07.553 "ublk_stop_disk", 00:07:07.553 "ublk_start_disk", 00:07:07.553 "ublk_destroy_target", 00:07:07.553 "ublk_create_target", 00:07:07.553 "virtio_blk_create_transport", 00:07:07.553 "virtio_blk_get_transports", 00:07:07.553 "vhost_controller_set_coalescing", 00:07:07.553 "vhost_get_controllers", 00:07:07.553 "vhost_delete_controller", 00:07:07.553 "vhost_create_blk_controller", 00:07:07.553 "vhost_scsi_controller_remove_target", 00:07:07.553 "vhost_scsi_controller_add_target", 00:07:07.553 "vhost_start_scsi_controller", 00:07:07.553 "vhost_create_scsi_controller", 00:07:07.553 "thread_set_cpumask", 00:07:07.553 "scheduler_set_options", 00:07:07.553 "framework_get_governor", 00:07:07.553 "framework_get_scheduler", 00:07:07.553 "framework_set_scheduler", 00:07:07.553 "framework_get_reactors", 00:07:07.553 "thread_get_io_channels", 00:07:07.553 "thread_get_pollers", 00:07:07.553 "thread_get_stats", 00:07:07.553 "framework_monitor_context_switch", 00:07:07.553 "spdk_kill_instance", 00:07:07.553 "log_enable_timestamps", 00:07:07.553 "log_get_flags", 00:07:07.553 "log_clear_flag", 00:07:07.553 "log_set_flag", 00:07:07.553 "log_get_level", 00:07:07.553 "log_set_level", 00:07:07.553 "log_get_print_level", 00:07:07.553 "log_set_print_level", 00:07:07.553 "framework_enable_cpumask_locks", 00:07:07.553 "framework_disable_cpumask_locks", 00:07:07.553 "framework_wait_init", 00:07:07.553 "framework_start_init", 00:07:07.553 "scsi_get_devices", 00:07:07.553 "bdev_get_histogram", 00:07:07.553 "bdev_enable_histogram", 00:07:07.553 "bdev_set_qos_limit", 00:07:07.553 "bdev_set_qd_sampling_period", 00:07:07.553 "bdev_get_bdevs", 00:07:07.553 "bdev_reset_iostat", 00:07:07.553 "bdev_get_iostat", 00:07:07.553 "bdev_examine", 00:07:07.553 "bdev_wait_for_examine", 00:07:07.553 "bdev_set_options", 00:07:07.553 "accel_get_stats", 00:07:07.553 "accel_set_options", 00:07:07.553 "accel_set_driver", 00:07:07.553 "accel_crypto_key_destroy", 00:07:07.553 "accel_crypto_keys_get", 00:07:07.553 "accel_crypto_key_create", 00:07:07.553 "accel_assign_opc", 00:07:07.553 "accel_get_module_info", 00:07:07.553 "accel_get_opc_assignments", 00:07:07.553 "vmd_rescan", 00:07:07.553 "vmd_remove_device", 00:07:07.553 "vmd_enable", 00:07:07.553 "sock_get_default_impl", 00:07:07.553 "sock_set_default_impl", 00:07:07.553 "sock_impl_set_options", 00:07:07.553 "sock_impl_get_options", 00:07:07.553 "iobuf_get_stats", 00:07:07.553 "iobuf_set_options", 00:07:07.553 "keyring_get_keys", 00:07:07.553 "framework_get_pci_devices", 00:07:07.553 
"framework_get_config", 00:07:07.553 "framework_get_subsystems", 00:07:07.553 "fsdev_set_opts", 00:07:07.553 "fsdev_get_opts", 00:07:07.553 "trace_get_info", 00:07:07.553 "trace_get_tpoint_group_mask", 00:07:07.553 "trace_disable_tpoint_group", 00:07:07.553 "trace_enable_tpoint_group", 00:07:07.553 "trace_clear_tpoint_mask", 00:07:07.553 "trace_set_tpoint_mask", 00:07:07.553 "notify_get_notifications", 00:07:07.553 "notify_get_types", 00:07:07.553 "spdk_get_version", 00:07:07.553 "rpc_get_methods" 00:07:07.553 ] 00:07:07.553 11:02:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:07.553 11:02:36 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:07.553 11:02:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.553 11:02:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:07.553 11:02:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70605 00:07:07.553 11:02:36 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70605 ']' 00:07:07.553 11:02:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70605 00:07:07.553 11:02:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:07.553 11:02:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.553 11:02:36 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70605 00:07:07.553 killing process with pid 70605 00:07:07.553 11:02:36 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.553 11:02:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.553 11:02:36 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70605' 00:07:07.553 11:02:36 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70605 00:07:07.553 11:02:36 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70605 00:07:07.814 ************************************ 00:07:07.814 END TEST spdkcli_tcp 00:07:07.814 ************************************ 00:07:07.814 00:07:07.814 real 0m1.607s 00:07:07.814 user 0m2.813s 00:07:07.814 sys 0m0.418s 00:07:07.814 11:02:36 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.814 11:02:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.814 11:02:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:07.814 11:02:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.814 11:02:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.814 11:02:36 -- common/autotest_common.sh@10 -- # set +x 00:07:08.074 ************************************ 00:07:08.074 START TEST dpdk_mem_utility 00:07:08.074 ************************************ 00:07:08.074 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:08.074 * Looking for test storage... 
00:07:08.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:08.074 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.074 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.074 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.074 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.074 11:02:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:08.074 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.074 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.074 --rc genhtml_branch_coverage=1 00:07:08.074 --rc genhtml_function_coverage=1 00:07:08.074 --rc genhtml_legend=1 00:07:08.074 --rc geninfo_all_blocks=1 00:07:08.074 --rc geninfo_unexecuted_blocks=1 00:07:08.074 00:07:08.074 ' 00:07:08.074 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.074 --rc 
genhtml_branch_coverage=1 00:07:08.074 --rc genhtml_function_coverage=1 00:07:08.074 --rc genhtml_legend=1 00:07:08.074 --rc geninfo_all_blocks=1 00:07:08.074 --rc geninfo_unexecuted_blocks=1 00:07:08.074 00:07:08.074 ' 00:07:08.074 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.074 --rc genhtml_branch_coverage=1 00:07:08.074 --rc genhtml_function_coverage=1 00:07:08.074 --rc genhtml_legend=1 00:07:08.074 --rc geninfo_all_blocks=1 00:07:08.074 --rc geninfo_unexecuted_blocks=1 00:07:08.074 00:07:08.074 ' 00:07:08.074 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.074 --rc genhtml_branch_coverage=1 00:07:08.074 --rc genhtml_function_coverage=1 00:07:08.074 --rc genhtml_legend=1 00:07:08.074 --rc geninfo_all_blocks=1 00:07:08.074 --rc geninfo_unexecuted_blocks=1 00:07:08.074 00:07:08.074 ' 00:07:08.074 11:02:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:08.074 11:02:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70705 00:07:08.075 11:02:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70705 00:07:08.075 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70705 ']' 00:07:08.075 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.075 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.075 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.075 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.075 11:02:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:08.075 11:02:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:08.075 [2024-11-27 11:02:36.923463] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:08.075 [2024-11-27 11:02:36.923602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70705 ] 00:07:08.335 [2024-11-27 11:02:37.071448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.335 [2024-11-27 11:02:37.104225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.903 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.903 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:08.903 11:02:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:08.903 11:02:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:08.903 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.903 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:08.903 { 00:07:08.903 "filename": "/tmp/spdk_mem_dump.txt" 00:07:08.903 } 00:07:08.903 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.903 11:02:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:09.165 DPDK memory size 860.000000 MiB in 1 heap(s) 00:07:09.165 1 heaps totaling size 860.000000 MiB 00:07:09.165 size: 860.000000 MiB heap id: 0 00:07:09.165 end heaps---------- 00:07:09.165 9 mempools totaling size 642.649841 MiB 00:07:09.165 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:09.165 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:09.165 size: 92.545471 MiB name: bdev_io_70705 00:07:09.165 size: 51.011292 MiB name: evtpool_70705 00:07:09.165 size: 50.003479 MiB name: msgpool_70705 00:07:09.165 size: 36.509338 MiB name: fsdev_io_70705 00:07:09.165 size: 21.763794 MiB name: PDU_Pool 00:07:09.165 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:09.165 size: 0.026123 MiB name: Session_Pool 00:07:09.165 end mempools------- 00:07:09.165 6 memzones totaling size 4.142822 MiB 00:07:09.165 size: 1.000366 MiB name: RG_ring_0_70705 00:07:09.165 size: 1.000366 MiB name: RG_ring_1_70705 00:07:09.165 size: 1.000366 MiB name: RG_ring_4_70705 00:07:09.165 size: 1.000366 MiB name: RG_ring_5_70705 00:07:09.165 size: 0.125366 MiB name: RG_ring_2_70705 00:07:09.165 size: 0.015991 MiB name: RG_ring_3_70705 00:07:09.165 end memzones------- 00:07:09.165 11:02:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:09.165 heap id: 0 total size: 860.000000 MiB number of busy elements: 325 number of free elements: 16 00:07:09.165 list of free elements. 
size: 13.933228 MiB 00:07:09.165 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:09.165 element at address: 0x200000800000 with size: 1.996948 MiB 00:07:09.165 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:07:09.165 element at address: 0x20001be00000 with size: 0.999878 MiB 00:07:09.165 element at address: 0x200034a00000 with size: 0.994446 MiB 00:07:09.165 element at address: 0x200009600000 with size: 0.959839 MiB 00:07:09.165 element at address: 0x200015e00000 with size: 0.954285 MiB 00:07:09.165 element at address: 0x20001c000000 with size: 0.936584 MiB 00:07:09.165 element at address: 0x200000200000 with size: 0.835022 MiB 00:07:09.165 element at address: 0x20001d800000 with size: 0.566589 MiB 00:07:09.165 element at address: 0x20000d800000 with size: 0.489258 MiB 00:07:09.165 element at address: 0x200003e00000 with size: 0.487183 MiB 00:07:09.165 element at address: 0x20001c200000 with size: 0.485657 MiB 00:07:09.165 element at address: 0x200007000000 with size: 0.480286 MiB 00:07:09.165 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:07:09.165 element at address: 0x200003a00000 with size: 0.352112 MiB 00:07:09.165 list of standard malloc elements. size: 199.270081 MiB 00:07:09.165 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:07:09.165 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:07:09.165 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:07:09.165 element at address: 0x20001befff80 with size: 1.000122 MiB 00:07:09.165 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:07:09.165 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:09.165 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:07:09.165 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:09.165 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:07:09.165 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:07:09.165 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:09.165 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a5a240 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a5e700 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7e9c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7ea80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003aff880 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7cb80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7cc40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7cd00 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:07:09.166 element at 
address: 0x200003e7d000 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000707af40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000707b000 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000707b180 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000707b240 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000707b300 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000707b480 
with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000707b540 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000707b600 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:07:09.166 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d8910c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891180 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891240 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891300 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891480 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891540 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891600 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891780 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891840 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891900 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892080 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892140 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892200 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892380 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892440 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892500 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d8925c0 with size: 0.000183 MiB 
00:07:09.166 element at address: 0x20001d892680 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892740 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892800 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892980 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d893040 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d893100 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d893280 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d893340 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d893400 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d893580 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d893640 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d893700 with size: 0.000183 MiB 00:07:09.166 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d893880 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d893940 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894000 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894180 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894240 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894300 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894480 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894540 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894600 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894780 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894840 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894900 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:07:09.167 element at 
address: 0x20001d894b40 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d895080 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d895140 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d895200 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d895380 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20001d895440 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6dc80 
with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:07:09.167 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:07:09.167 list of memzone associated elements. 
size: 646.796692 MiB 00:07:09.167 element at address: 0x20001d895500 with size: 211.416748 MiB 00:07:09.167 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:09.167 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:07:09.167 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:09.167 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:07:09.167 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70705_0 00:07:09.167 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:09.167 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70705_0 00:07:09.167 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:09.167 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70705_0 00:07:09.167 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:07:09.167 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70705_0 00:07:09.167 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:07:09.167 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:09.167 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:07:09.167 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:09.167 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:09.167 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70705 00:07:09.167 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:09.167 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70705 00:07:09.167 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:09.167 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70705 00:07:09.167 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:07:09.167 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:09.167 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:07:09.167 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:09.167 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:07:09.167 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:09.167 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:07:09.168 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:09.168 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:09.168 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70705 00:07:09.168 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:09.168 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70705 00:07:09.168 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:07:09.168 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70705 00:07:09.168 element at address: 0x200034afe940 with size: 1.000488 MiB 00:07:09.168 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70705 00:07:09.168 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:07:09.168 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70705 00:07:09.168 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:07:09.168 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70705 00:07:09.168 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:07:09.168 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:09.168 element at address: 0x20000707b780 with size: 0.500488 MiB 00:07:09.168 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:07:09.168 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:07:09.168 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:09.168 element at address: 0x200003a5e7c0 with size: 0.125488 MiB 00:07:09.168 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70705 00:07:09.168 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:07:09.168 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:09.168 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:07:09.168 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:09.168 element at address: 0x200003a5a500 with size: 0.016113 MiB 00:07:09.168 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70705 00:07:09.168 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:07:09.168 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:09.168 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:07:09.168 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70705 00:07:09.168 element at address: 0x200003aff940 with size: 0.000305 MiB 00:07:09.168 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70705 00:07:09.168 element at address: 0x200003a5a300 with size: 0.000305 MiB 00:07:09.168 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70705 00:07:09.168 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:07:09.168 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:09.168 11:02:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:09.168 11:02:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70705 00:07:09.168 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70705 ']' 00:07:09.168 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70705 00:07:09.168 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:09.168 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.168 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70705 00:07:09.168 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.168 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.168 killing process with pid 70705 00:07:09.168 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70705' 00:07:09.168 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70705 00:07:09.168 11:02:37 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70705 00:07:09.426 00:07:09.426 real 0m1.456s 00:07:09.426 user 0m1.479s 00:07:09.426 sys 0m0.384s 00:07:09.426 11:02:38 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.426 11:02:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:09.426 ************************************ 00:07:09.426 END TEST dpdk_mem_utility 00:07:09.426 ************************************ 00:07:09.426 11:02:38 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:09.426 11:02:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.426 11:02:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.426 11:02:38 -- common/autotest_common.sh@10 -- # set +x 
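For reference, the heap/mempool/memzone dump above can be reproduced by hand against a running spdk_tgt; the following is a minimal sketch, assuming the repository paths, the default /var/tmp/spdk.sock RPC socket, and the dump file name shown earlier in this log, not the literal commands of test_dpdk_mem_info.sh:

  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt &                    # start the target and wait until it listens on /var/tmp/spdk.sock
  ./scripts/rpc.py env_dpdk_get_mem_stats   # asks DPDK to write its stats to /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                # summarizes heaps, mempools and memzones from that dump
  ./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as printed above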
00:07:09.426 ************************************ 00:07:09.426 START TEST event 00:07:09.426 ************************************ 00:07:09.426 11:02:38 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:09.426 * Looking for test storage... 00:07:09.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:09.426 11:02:38 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:09.426 11:02:38 event -- common/autotest_common.sh@1681 -- # lcov --version 00:07:09.426 11:02:38 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:09.685 11:02:38 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:09.685 11:02:38 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.685 11:02:38 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.685 11:02:38 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.685 11:02:38 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.685 11:02:38 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.685 11:02:38 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.685 11:02:38 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.685 11:02:38 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.685 11:02:38 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.685 11:02:38 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.685 11:02:38 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.685 11:02:38 event -- scripts/common.sh@344 -- # case "$op" in 00:07:09.685 11:02:38 event -- scripts/common.sh@345 -- # : 1 00:07:09.685 11:02:38 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.685 11:02:38 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.685 11:02:38 event -- scripts/common.sh@365 -- # decimal 1 00:07:09.685 11:02:38 event -- scripts/common.sh@353 -- # local d=1 00:07:09.685 11:02:38 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.685 11:02:38 event -- scripts/common.sh@355 -- # echo 1 00:07:09.685 11:02:38 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.685 11:02:38 event -- scripts/common.sh@366 -- # decimal 2 00:07:09.685 11:02:38 event -- scripts/common.sh@353 -- # local d=2 00:07:09.685 11:02:38 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.685 11:02:38 event -- scripts/common.sh@355 -- # echo 2 00:07:09.685 11:02:38 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.685 11:02:38 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.685 11:02:38 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.685 11:02:38 event -- scripts/common.sh@368 -- # return 0 00:07:09.685 11:02:38 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.685 11:02:38 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:09.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.685 --rc genhtml_branch_coverage=1 00:07:09.685 --rc genhtml_function_coverage=1 00:07:09.685 --rc genhtml_legend=1 00:07:09.685 --rc geninfo_all_blocks=1 00:07:09.685 --rc geninfo_unexecuted_blocks=1 00:07:09.685 00:07:09.685 ' 00:07:09.685 11:02:38 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:09.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.685 --rc genhtml_branch_coverage=1 00:07:09.685 --rc genhtml_function_coverage=1 00:07:09.685 --rc genhtml_legend=1 00:07:09.685 --rc 
geninfo_all_blocks=1 00:07:09.685 --rc geninfo_unexecuted_blocks=1 00:07:09.685 00:07:09.685 ' 00:07:09.685 11:02:38 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:09.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.685 --rc genhtml_branch_coverage=1 00:07:09.685 --rc genhtml_function_coverage=1 00:07:09.685 --rc genhtml_legend=1 00:07:09.685 --rc geninfo_all_blocks=1 00:07:09.685 --rc geninfo_unexecuted_blocks=1 00:07:09.685 00:07:09.685 ' 00:07:09.685 11:02:38 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:09.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.685 --rc genhtml_branch_coverage=1 00:07:09.685 --rc genhtml_function_coverage=1 00:07:09.685 --rc genhtml_legend=1 00:07:09.685 --rc geninfo_all_blocks=1 00:07:09.685 --rc geninfo_unexecuted_blocks=1 00:07:09.685 00:07:09.685 ' 00:07:09.685 11:02:38 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:09.685 11:02:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:09.685 11:02:38 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:09.685 11:02:38 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:09.685 11:02:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.685 11:02:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.685 ************************************ 00:07:09.685 START TEST event_perf 00:07:09.685 ************************************ 00:07:09.685 11:02:38 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:09.685 Running I/O for 1 seconds...[2024-11-27 11:02:38.377819] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:09.685 [2024-11-27 11:02:38.377939] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70780 ] 00:07:09.685 [2024-11-27 11:02:38.525934] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.685 [2024-11-27 11:02:38.559565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.685 [2024-11-27 11:02:38.560091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.685 Running I/O for 1 seconds...[2024-11-27 11:02:38.560186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.685 [2024-11-27 11:02:38.560448] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.058 00:07:11.058 lcore 0: 146306 00:07:11.058 lcore 1: 146299 00:07:11.058 lcore 2: 146302 00:07:11.058 lcore 3: 146303 00:07:11.058 done. 
00:07:11.058 ************************************ 00:07:11.058 END TEST event_perf 00:07:11.058 ************************************ 00:07:11.058 00:07:11.058 real 0m1.269s 00:07:11.058 user 0m4.061s 00:07:11.058 sys 0m0.085s 00:07:11.058 11:02:39 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.058 11:02:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:11.058 11:02:39 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:11.058 11:02:39 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:11.058 11:02:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.058 11:02:39 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.058 ************************************ 00:07:11.058 START TEST event_reactor 00:07:11.058 ************************************ 00:07:11.058 11:02:39 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:11.058 [2024-11-27 11:02:39.691025] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:11.058 [2024-11-27 11:02:39.691138] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70819 ] 00:07:11.058 [2024-11-27 11:02:39.840685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.058 [2024-11-27 11:02:39.870736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.431 test_start 00:07:12.431 oneshot 00:07:12.431 tick 100 00:07:12.431 tick 100 00:07:12.431 tick 250 00:07:12.431 tick 100 00:07:12.431 tick 100 00:07:12.431 tick 100 00:07:12.431 tick 250 00:07:12.431 tick 500 00:07:12.431 tick 100 00:07:12.431 tick 100 00:07:12.432 tick 250 00:07:12.432 tick 100 00:07:12.432 tick 100 00:07:12.432 test_end 00:07:12.432 00:07:12.432 real 0m1.266s 00:07:12.432 user 0m1.090s 00:07:12.432 sys 0m0.069s 00:07:12.432 11:02:40 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.432 ************************************ 00:07:12.432 END TEST event_reactor 00:07:12.432 11:02:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:12.432 ************************************ 00:07:12.432 11:02:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:12.432 11:02:40 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:12.432 11:02:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.432 11:02:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:12.432 ************************************ 00:07:12.432 START TEST event_reactor_perf 00:07:12.432 ************************************ 00:07:12.432 11:02:40 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:12.432 [2024-11-27 11:02:40.995089] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:12.432 [2024-11-27 11:02:40.995196] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70856 ] 00:07:12.432 [2024-11-27 11:02:41.141867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.432 [2024-11-27 11:02:41.172643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.365 test_start 00:07:13.365 test_end 00:07:13.365 Performance: 313351 events per second 00:07:13.365 00:07:13.365 real 0m1.263s 00:07:13.365 user 0m1.081s 00:07:13.365 sys 0m0.075s 00:07:13.365 11:02:42 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.365 11:02:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:13.365 ************************************ 00:07:13.365 END TEST event_reactor_perf 00:07:13.365 ************************************ 00:07:13.623 11:02:42 event -- event/event.sh@49 -- # uname -s 00:07:13.623 11:02:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:13.623 11:02:42 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:13.623 11:02:42 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.623 11:02:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.623 11:02:42 event -- common/autotest_common.sh@10 -- # set +x 00:07:13.623 ************************************ 00:07:13.623 START TEST event_scheduler 00:07:13.623 ************************************ 00:07:13.623 11:02:42 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:13.623 * Looking for test storage... 
00:07:13.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:13.623 11:02:42 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:13.623 11:02:42 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:13.623 11:02:42 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:07:13.623 11:02:42 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:13.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.623 11:02:42 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:13.623 11:02:42 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.623 11:02:42 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:13.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.623 --rc genhtml_branch_coverage=1 00:07:13.623 --rc genhtml_function_coverage=1 00:07:13.623 --rc genhtml_legend=1 00:07:13.623 --rc geninfo_all_blocks=1 00:07:13.623 --rc geninfo_unexecuted_blocks=1 00:07:13.623 00:07:13.623 ' 00:07:13.623 11:02:42 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:13.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.623 --rc genhtml_branch_coverage=1 00:07:13.623 --rc genhtml_function_coverage=1 00:07:13.623 --rc genhtml_legend=1 00:07:13.623 --rc geninfo_all_blocks=1 00:07:13.623 --rc geninfo_unexecuted_blocks=1 00:07:13.623 00:07:13.623 ' 00:07:13.623 11:02:42 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:13.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.623 --rc genhtml_branch_coverage=1 00:07:13.623 --rc genhtml_function_coverage=1 00:07:13.623 --rc genhtml_legend=1 00:07:13.623 --rc geninfo_all_blocks=1 00:07:13.623 --rc geninfo_unexecuted_blocks=1 00:07:13.623 00:07:13.623 ' 00:07:13.624 11:02:42 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:13.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.624 --rc genhtml_branch_coverage=1 00:07:13.624 --rc genhtml_function_coverage=1 00:07:13.624 --rc genhtml_legend=1 00:07:13.624 --rc geninfo_all_blocks=1 00:07:13.624 --rc geninfo_unexecuted_blocks=1 00:07:13.624 00:07:13.624 ' 00:07:13.624 11:02:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:13.624 11:02:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70921 00:07:13.624 11:02:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:13.624 11:02:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70921 00:07:13.624 11:02:42 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70921 ']' 00:07:13.624 11:02:42 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.624 11:02:42 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.624 11:02:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:13.624 11:02:42 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.624 11:02:42 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.624 11:02:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:13.624 [2024-11-27 11:02:42.486141] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:13.624 [2024-11-27 11:02:42.486565] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70921 ] 00:07:13.880 [2024-11-27 11:02:42.637841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.880 [2024-11-27 11:02:42.682696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.880 [2024-11-27 11:02:42.682946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.880 [2024-11-27 11:02:42.682985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.880 [2024-11-27 11:02:42.683117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.519 11:02:43 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.519 11:02:43 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:14.519 11:02:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:14.519 11:02:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.519 11:02:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:14.519 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:14.519 POWER: Cannot set governor of lcore 0 to userspace 00:07:14.519 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:14.519 POWER: Cannot set governor of lcore 0 to performance 00:07:14.519 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:14.519 POWER: Cannot set governor of lcore 0 to userspace 00:07:14.519 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:14.519 POWER: Cannot set governor of lcore 0 to userspace 00:07:14.519 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:14.519 POWER: Unable to set Power Management Environment for lcore 0 00:07:14.519 [2024-11-27 11:02:43.337059] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:14.519 [2024-11-27 11:02:43.337087] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:14.519 [2024-11-27 11:02:43.337097] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:14.519 [2024-11-27 11:02:43.337113] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:14.519 [2024-11-27 11:02:43.337121] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:14.519 [2024-11-27 11:02:43.337145] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:14.519 11:02:43 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.519 11:02:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:14.519 11:02:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.519 11:02:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:14.776 [2024-11-27 11:02:43.410845] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
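For reference, the scheduler selection that rpc_cmd performs in the trace above can be reproduced by hand against the test app's default RPC socket. This is a minimal sketch, not the harness itself: the binary path, core mask and flags are taken from the trace, the sleep is a crude stand-in for the harness's waitforlisten helper, and the POWER/governor errors above only mean the dpdk governor falls back; the dynamic scheduler still loads.

scheduler_bin=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$scheduler_bin" -m 0xF -p 0x2 --wait-for-rpc -f &   # start paused, waiting for RPC (flags as in the trace)
sleep 2                                              # stand-in for waitforlisten on /var/tmp/spdk.sock
"$rpc" framework_set_scheduler dynamic               # same RPC that rpc_cmd issues above
"$rpc" framework_start_init                          # resume initialization once the scheduler is set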
00:07:14.776 11:02:43 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.776 11:02:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:14.776 11:02:43 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.776 11:02:43 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.776 11:02:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:14.776 ************************************ 00:07:14.776 START TEST scheduler_create_thread 00:07:14.776 ************************************ 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.776 2 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.776 3 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.776 4 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.776 5 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.776 6 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.776 7 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.776 8 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.776 9 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.776 10 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.776 11:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.339 11:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.339 11:02:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:15.339 11:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.339 11:02:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.708 11:02:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.708 11:02:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:16.708 11:02:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:16.708 11:02:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.708 11:02:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.642 11:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.642 00:07:17.642 real 0m3.091s 00:07:17.642 user 0m0.015s 00:07:17.642 sys 0m0.004s 00:07:17.642 11:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.642 11:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.642 ************************************ 00:07:17.642 END TEST scheduler_create_thread 00:07:17.642 ************************************ 00:07:17.901 11:02:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:17.901 11:02:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70921 00:07:17.901 11:02:46 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70921 ']' 00:07:17.901 11:02:46 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70921 00:07:17.901 11:02:46 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:17.901 11:02:46 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.901 11:02:46 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70921 00:07:17.901 killing process with pid 70921 00:07:17.901 11:02:46 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:17.901 11:02:46 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:17.901 11:02:46 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70921' 00:07:17.901 11:02:46 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70921 00:07:17.901 11:02:46 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70921 00:07:18.158 [2024-11-27 11:02:46.891583] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
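The scheduler_create_thread sub-test above exercises three plugin RPCs: scheduler_thread_create (which prints the new thread id), scheduler_thread_set_active, and scheduler_thread_delete. A condensed sketch of that lifecycle, using the same flags seen in the trace (-n name, -m cpumask, -a active percentage); the thread name is made up here, and the scheduler_plugin module that ships with the test must be importable (the test runs it from the scheduler test directory):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# create a thread pinned to core 0 that reports itself ~100% active
tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n demo_pinned -m 0x1 -a 100)
"$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50   # retune to 50% active
"$rpc" --plugin scheduler_plugin scheduler_thread_delete "$tid"          # remove it again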
00:07:18.416 ************************************ 00:07:18.416 END TEST event_scheduler 00:07:18.416 ************************************ 00:07:18.416 00:07:18.416 real 0m4.829s 00:07:18.416 user 0m9.025s 00:07:18.416 sys 0m0.379s 00:07:18.416 11:02:47 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.416 11:02:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:18.416 11:02:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:18.416 11:02:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:18.416 11:02:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.416 11:02:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.416 11:02:47 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.416 ************************************ 00:07:18.416 START TEST app_repeat 00:07:18.417 ************************************ 00:07:18.417 11:02:47 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:18.417 Process app_repeat pid: 71027 00:07:18.417 spdk_app_start Round 0 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71027 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71027' 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71027 /var/tmp/spdk-nbd.sock 00:07:18.417 11:02:47 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71027 ']' 00:07:18.417 11:02:47 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:18.417 11:02:47 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.417 11:02:47 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:18.417 11:02:47 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:18.417 11:02:47 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.417 11:02:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.417 [2024-11-27 11:02:47.191903] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:18.417 [2024-11-27 11:02:47.192017] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71027 ] 00:07:18.675 [2024-11-27 11:02:47.339457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.675 [2024-11-27 11:02:47.369834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.675 [2024-11-27 11:02:47.369853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.242 11:02:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.242 11:02:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:19.242 11:02:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:19.500 Malloc0 00:07:19.501 11:02:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:19.759 Malloc1 00:07:19.759 11:02:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.759 11:02:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:20.016 /dev/nbd0 00:07:20.017 11:02:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:20.017 11:02:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:20.017 11:02:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:20.017 11:02:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:20.017 11:02:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.017 11:02:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.017 11:02:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:20.017 11:02:48 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:07:20.017 11:02:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.017 11:02:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.017 11:02:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:20.017 1+0 records in 00:07:20.017 1+0 records out 00:07:20.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163549 s, 25.0 MB/s 00:07:20.017 11:02:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.017 11:02:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:20.017 11:02:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.017 11:02:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.017 11:02:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:20.017 11:02:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.017 11:02:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.017 11:02:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:20.275 /dev/nbd1 00:07:20.275 11:02:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:20.275 11:02:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:20.275 1+0 records in 00:07:20.275 1+0 records out 00:07:20.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168636 s, 24.3 MB/s 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.275 11:02:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:20.275 11:02:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.275 11:02:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.275 11:02:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.275 11:02:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
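The Round 0 setup traced above boils down to creating two malloc bdevs and exporting them over NBD, then probing each device before use. A sketch of that sequence with the same RPCs and arguments (64 MiB bdevs, 4096-byte blocks, the spdk-nbd.sock socket); the scratch path for the probe read is hypothetical, the test keeps it under its own test/event directory:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create 64 4096        # prints "Malloc0"
$rpc bdev_malloc_create 64 4096        # prints "Malloc1"
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1
# readiness probe, as waitfornbd does: device listed in /proc/partitions
# and a one-block direct-I/O read succeeds
grep -q -w nbd0 /proc/partitions
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct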
00:07:20.275 11:02:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.275 11:02:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd0", 00:07:20.275 "bdev_name": "Malloc0" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd1", 00:07:20.275 "bdev_name": "Malloc1" 00:07:20.275 } 00:07:20.275 ]' 00:07:20.275 11:02:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd0", 00:07:20.275 "bdev_name": "Malloc0" 00:07:20.275 }, 00:07:20.275 { 00:07:20.275 "nbd_device": "/dev/nbd1", 00:07:20.275 "bdev_name": "Malloc1" 00:07:20.275 } 00:07:20.275 ]' 00:07:20.275 11:02:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.533 11:02:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:20.533 /dev/nbd1' 00:07:20.533 11:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.533 11:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:20.533 /dev/nbd1' 00:07:20.533 11:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:20.534 256+0 records in 00:07:20.534 256+0 records out 00:07:20.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115702 s, 90.6 MB/s 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:20.534 256+0 records in 00:07:20.534 256+0 records out 00:07:20.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170564 s, 61.5 MB/s 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:20.534 256+0 records in 00:07:20.534 256+0 records out 00:07:20.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015682 s, 66.9 MB/s 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.534 11:02:49 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.534 11:02:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.792 11:02:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.792 11:02:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.792 11:02:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.792 11:02:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.792 11:02:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.792 11:02:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.792 11:02:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.792 11:02:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.792 11:02:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.792 11:02:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.050 11:02:49 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.050 11:02:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:21.308 11:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.308 11:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.308 11:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:21.308 11:02:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.308 11:02:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.308 11:02:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:21.308 11:02:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:21.308 11:02:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:21.308 11:02:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:21.308 11:02:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:21.565 [2024-11-27 11:02:50.249356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.565 [2024-11-27 11:02:50.278307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.565 [2024-11-27 11:02:50.278337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.565 [2024-11-27 11:02:50.308820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:21.565 [2024-11-27 11:02:50.308877] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:24.847 spdk_app_start Round 1 00:07:24.847 11:02:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:24.847 11:02:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:24.847 11:02:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71027 /var/tmp/spdk-nbd.sock 00:07:24.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:24.847 11:02:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71027 ']' 00:07:24.847 11:02:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:24.847 11:02:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.847 11:02:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
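The data-verify and teardown that closed Round 0 above follow a simple pattern: write 1 MiB of random data through each NBD device with direct I/O, read it back with cmp, then stop the exports and confirm nbd_get_disks is empty. A sketch under the same assumptions as before (hypothetical scratch path, loop added here for brevity):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
tmp=/tmp/nbdrandtest                         # hypothetical; the test uses test/event/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write through the NBD device
    cmp -b -n 1M "$tmp" "$dev"                              # read back and compare
done
rm "$tmp"
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc nbd_get_disks                           # expected to print an empty list once both are stopped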
00:07:24.847 11:02:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.847 11:02:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:24.847 11:02:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.847 11:02:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:24.847 11:02:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:24.847 Malloc0 00:07:24.847 11:02:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:25.106 Malloc1 00:07:25.106 11:02:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.106 11:02:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:25.363 /dev/nbd0 00:07:25.363 11:02:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:25.363 11:02:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:25.363 11:02:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:25.363 11:02:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:25.363 11:02:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.363 11:02:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.364 11:02:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:25.364 11:02:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:25.364 11:02:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.364 11:02:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.364 11:02:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.364 1+0 records in 00:07:25.364 1+0 records out 
00:07:25.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000122286 s, 33.5 MB/s 00:07:25.364 11:02:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.364 11:02:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:25.364 11:02:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.364 11:02:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.364 11:02:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:25.364 11:02:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.364 11:02:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.364 11:02:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:25.364 /dev/nbd1 00:07:25.364 11:02:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:25.622 11:02:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.622 1+0 records in 00:07:25.622 1+0 records out 00:07:25.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237028 s, 17.3 MB/s 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.622 11:02:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:25.622 11:02:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.622 11:02:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.622 11:02:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.622 11:02:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.622 11:02:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.622 11:02:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:25.622 { 00:07:25.622 "nbd_device": "/dev/nbd0", 00:07:25.622 "bdev_name": "Malloc0" 00:07:25.622 }, 00:07:25.622 { 00:07:25.622 "nbd_device": "/dev/nbd1", 00:07:25.622 "bdev_name": "Malloc1" 00:07:25.622 } 
00:07:25.622 ]' 00:07:25.622 11:02:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:25.622 { 00:07:25.622 "nbd_device": "/dev/nbd0", 00:07:25.622 "bdev_name": "Malloc0" 00:07:25.622 }, 00:07:25.622 { 00:07:25.622 "nbd_device": "/dev/nbd1", 00:07:25.622 "bdev_name": "Malloc1" 00:07:25.622 } 00:07:25.622 ]' 00:07:25.622 11:02:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:25.881 /dev/nbd1' 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:25.881 /dev/nbd1' 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:25.881 256+0 records in 00:07:25.881 256+0 records out 00:07:25.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00901199 s, 116 MB/s 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:25.881 256+0 records in 00:07:25.881 256+0 records out 00:07:25.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166565 s, 63.0 MB/s 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:25.881 256+0 records in 00:07:25.881 256+0 records out 00:07:25.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159931 s, 65.6 MB/s 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:25.881 11:02:54 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:25.881 11:02:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.882 11:02:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:25.882 11:02:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:25.882 11:02:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:25.882 11:02:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:25.882 11:02:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.882 11:02:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.882 11:02:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:25.882 11:02:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:25.882 11:02:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.882 11:02:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.882 11:02:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:26.140 11:02:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:26.140 11:02:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:26.140 11:02:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:26.140 11:02:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.140 11:02:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.140 11:02:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:26.140 11:02:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:26.140 11:02:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.140 11:02:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.140 11:02:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.140 11:02:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.398 11:02:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:26.398 11:02:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.398 11:02:55 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:07:26.398 11:02:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:26.398 11:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.398 11:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:26.398 11:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:26.398 11:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:26.398 11:02:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:26.398 11:02:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:26.398 11:02:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:26.398 11:02:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:26.398 11:02:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:26.657 11:02:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:26.657 [2024-11-27 11:02:55.500512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.657 [2024-11-27 11:02:55.529917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.657 [2024-11-27 11:02:55.529955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.915 [2024-11-27 11:02:55.562317] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:26.915 [2024-11-27 11:02:55.562363] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:30.197 spdk_app_start Round 2 00:07:30.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:30.197 11:02:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:30.197 11:02:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:30.197 11:02:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71027 /var/tmp/spdk-nbd.sock 00:07:30.197 11:02:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71027 ']' 00:07:30.197 11:02:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:30.197 11:02:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.197 11:02:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
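Between rounds, the trace shows the harness sending spdk_kill_instance SIGTERM over the nbd socket, sleeping, and then waiting for the same app_repeat process (pid 71027) to listen again before the next "spdk_app_start Round N". A rough sketch of that hand-off; polling rpc_get_methods here is only this sketch's stand-in for the harness's waitforlisten helper:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc spdk_kill_instance SIGTERM                                # end the current app iteration
sleep 3
until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 1; done   # socket is accepting RPCs again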
00:07:30.197 11:02:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.197 11:02:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:30.197 11:02:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.197 11:02:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:30.197 11:02:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:30.197 Malloc0 00:07:30.197 11:02:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:30.197 Malloc1 00:07:30.197 11:02:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.197 11:02:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:30.455 /dev/nbd0 00:07:30.455 11:02:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:30.455 11:02:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:30.455 1+0 records in 00:07:30.455 1+0 records out 
00:07:30.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213325 s, 19.2 MB/s 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:30.455 11:02:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:30.455 11:02:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.455 11:02:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.455 11:02:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:30.712 /dev/nbd1 00:07:30.712 11:02:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:30.712 11:02:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:30.712 11:02:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:30.712 11:02:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:30.713 11:02:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:30.713 11:02:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:30.713 11:02:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:30.713 11:02:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:30.713 11:02:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:30.713 11:02:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:30.713 11:02:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:30.713 1+0 records in 00:07:30.713 1+0 records out 00:07:30.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025819 s, 15.9 MB/s 00:07:30.713 11:02:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.713 11:02:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:30.713 11:02:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.713 11:02:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:30.713 11:02:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:30.713 11:02:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.713 11:02:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.713 11:02:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:30.713 11:02:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.713 11:02:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:30.971 { 00:07:30.971 "nbd_device": "/dev/nbd0", 00:07:30.971 "bdev_name": "Malloc0" 00:07:30.971 }, 00:07:30.971 { 00:07:30.971 "nbd_device": "/dev/nbd1", 00:07:30.971 "bdev_name": "Malloc1" 00:07:30.971 } 
00:07:30.971 ]' 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:30.971 { 00:07:30.971 "nbd_device": "/dev/nbd0", 00:07:30.971 "bdev_name": "Malloc0" 00:07:30.971 }, 00:07:30.971 { 00:07:30.971 "nbd_device": "/dev/nbd1", 00:07:30.971 "bdev_name": "Malloc1" 00:07:30.971 } 00:07:30.971 ]' 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:30.971 /dev/nbd1' 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:30.971 /dev/nbd1' 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:30.971 256+0 records in 00:07:30.971 256+0 records out 00:07:30.971 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435355 s, 241 MB/s 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:30.971 256+0 records in 00:07:30.971 256+0 records out 00:07:30.971 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150193 s, 69.8 MB/s 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:30.971 256+0 records in 00:07:30.971 256+0 records out 00:07:30.971 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155757 s, 67.3 MB/s 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:30.971 11:02:59 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.971 11:02:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:31.230 11:02:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:31.230 11:03:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:31.230 11:03:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:31.230 11:03:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.230 11:03:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.230 11:03:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:31.230 11:03:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:31.230 11:03:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.230 11:03:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.230 11:03:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:31.488 11:03:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:31.488 11:03:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:31.488 11:03:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:31.488 11:03:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.488 11:03:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.488 11:03:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:31.488 11:03:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:31.488 11:03:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.488 11:03:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:31.488 11:03:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.488 11:03:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:31.747 11:03:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:31.747 11:03:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:31.747 11:03:00 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:31.747 11:03:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:31.747 11:03:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:31.747 11:03:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:31.747 11:03:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:31.747 11:03:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:31.747 11:03:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:31.747 11:03:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:31.747 11:03:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:31.747 11:03:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:31.747 11:03:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:32.005 11:03:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:32.005 [2024-11-27 11:03:00.768449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.005 [2024-11-27 11:03:00.797146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.006 [2024-11-27 11:03:00.797148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.006 [2024-11-27 11:03:00.826282] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:32.006 [2024-11-27 11:03:00.826325] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:35.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:35.326 11:03:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71027 /var/tmp/spdk-nbd.sock 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71027 ']' 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
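
Rounds 2 and 3 repeat the NBD data-verification sequence the trace above just walked through: create two 64 MB malloc bdevs with 4 KiB blocks, export them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data to each, read it back with cmp, then detach the devices and confirm nbd_get_disks reports an empty list. The following is a condensed sketch of that flow, using only the RPCs, paths and sizes visible in the trace and omitting the retry and error handling of the real nbd_common.sh helpers.

    #!/usr/bin/env bash
    # Sketch of the nbd_rpc_data_verify flow; assumes the repeat app is already
    # listening on /var/tmp/spdk-nbd.sock.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # Two 64 MB malloc bdevs (4096-byte blocks), exposed over NBD.
    $rpc bdev_malloc_create 64 4096          # -> Malloc0
    $rpc bdev_malloc_create 64 4096          # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Write 1 MiB of random data to each device and compare it back.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"

    # Detach both devices and confirm nothing is left registered.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] && echo "all NBD devices detached"
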
00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:35.326 11:03:03 event.app_repeat -- event/event.sh@39 -- # killprocess 71027 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71027 ']' 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71027 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71027 00:07:35.326 killing process with pid 71027 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71027' 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71027 00:07:35.326 11:03:03 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71027 00:07:35.326 spdk_app_start is called in Round 0. 00:07:35.326 Shutdown signal received, stop current app iteration 00:07:35.326 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:35.326 spdk_app_start is called in Round 1. 00:07:35.326 Shutdown signal received, stop current app iteration 00:07:35.326 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:35.326 spdk_app_start is called in Round 2. 00:07:35.326 Shutdown signal received, stop current app iteration 00:07:35.326 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:35.326 spdk_app_start is called in Round 3. 00:07:35.326 Shutdown signal received, stop current app iteration 00:07:35.326 ************************************ 00:07:35.326 END TEST app_repeat 00:07:35.326 ************************************ 00:07:35.326 11:03:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:35.326 11:03:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:35.326 00:07:35.326 real 0m16.883s 00:07:35.326 user 0m37.644s 00:07:35.326 sys 0m2.098s 00:07:35.326 11:03:04 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.326 11:03:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:35.326 11:03:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:35.326 11:03:04 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:35.326 11:03:04 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.326 11:03:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.326 11:03:04 event -- common/autotest_common.sh@10 -- # set +x 00:07:35.326 ************************************ 00:07:35.326 START TEST cpu_locks 00:07:35.326 ************************************ 00:07:35.326 11:03:04 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:35.326 * Looking for test storage... 
00:07:35.326 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:35.326 11:03:04 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:35.326 11:03:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:35.326 11:03:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.585 11:03:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.585 11:03:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:35.585 11:03:04 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.585 11:03:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.585 --rc genhtml_branch_coverage=1 00:07:35.585 --rc genhtml_function_coverage=1 00:07:35.585 --rc genhtml_legend=1 00:07:35.585 --rc geninfo_all_blocks=1 00:07:35.585 --rc geninfo_unexecuted_blocks=1 00:07:35.585 00:07:35.585 ' 00:07:35.585 11:03:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.585 --rc genhtml_branch_coverage=1 00:07:35.585 --rc genhtml_function_coverage=1 
00:07:35.585 --rc genhtml_legend=1 00:07:35.585 --rc geninfo_all_blocks=1 00:07:35.585 --rc geninfo_unexecuted_blocks=1 00:07:35.585 00:07:35.585 ' 00:07:35.585 11:03:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.585 --rc genhtml_branch_coverage=1 00:07:35.585 --rc genhtml_function_coverage=1 00:07:35.585 --rc genhtml_legend=1 00:07:35.585 --rc geninfo_all_blocks=1 00:07:35.585 --rc geninfo_unexecuted_blocks=1 00:07:35.585 00:07:35.585 ' 00:07:35.585 11:03:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.585 --rc genhtml_branch_coverage=1 00:07:35.585 --rc genhtml_function_coverage=1 00:07:35.585 --rc genhtml_legend=1 00:07:35.585 --rc geninfo_all_blocks=1 00:07:35.585 --rc geninfo_unexecuted_blocks=1 00:07:35.585 00:07:35.585 ' 00:07:35.585 11:03:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:35.585 11:03:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:35.585 11:03:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:35.585 11:03:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:35.585 11:03:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.585 11:03:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.585 11:03:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.585 ************************************ 00:07:35.585 START TEST default_locks 00:07:35.585 ************************************ 00:07:35.585 11:03:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:35.585 11:03:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71446 00:07:35.585 11:03:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71446 00:07:35.585 11:03:04 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71446 ']' 00:07:35.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.585 11:03:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.585 11:03:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.585 11:03:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.585 11:03:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.585 11:03:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:35.585 11:03:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.585 [2024-11-27 11:03:04.320053] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:35.585 [2024-11-27 11:03:04.320190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71446 ] 00:07:35.844 [2024-11-27 11:03:04.470681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.844 [2024-11-27 11:03:04.517004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.410 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.410 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:36.410 11:03:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71446 00:07:36.410 11:03:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71446 00:07:36.410 11:03:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:36.668 11:03:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71446 00:07:36.668 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71446 ']' 00:07:36.668 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71446 00:07:36.668 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:36.668 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.668 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71446 00:07:36.668 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.668 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.668 killing process with pid 71446 00:07:36.668 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71446' 00:07:36.668 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71446 00:07:36.668 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71446 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71446 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71446 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71446 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71446 ']' 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.929 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.929 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71446) - No such process 00:07:36.929 ERROR: process (pid: 71446) is no longer running 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:36.929 ************************************ 00:07:36.929 END TEST default_locks 00:07:36.929 ************************************ 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:36.929 00:07:36.929 real 0m1.360s 00:07:36.929 user 0m1.394s 00:07:36.929 sys 0m0.412s 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.929 11:03:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.929 11:03:05 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:36.929 11:03:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.929 11:03:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.929 11:03:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.929 ************************************ 00:07:36.929 START TEST default_locks_via_rpc 00:07:36.929 ************************************ 00:07:36.929 11:03:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:36.929 11:03:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71494 00:07:36.929 11:03:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71494 00:07:36.929 11:03:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71494 ']' 00:07:36.929 11:03:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
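
The default_locks test that just finished is the baseline contract for CPU core locks: a target started with -m 0x1 holds a file lock for core 0 that lslocks lists as an spdk_cpu_lock entry, killing the process releases it, and a second waitforlisten on the dead PID fails with "No such process" while no_locks finds no leftover lock files. A small sketch of that check, reusing the commands from the trace (the spdk_cpu_lock name comes straight from the grep above; waitforlisten is again the autotest_common.sh helper):

    #!/usr/bin/env bash
    # Sketch of the default_locks check.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $spdk_tgt -m 0x1 &            # claim core 0
    pid=$!
    waitforlisten "$pid"          # waits for /var/tmp/spdk.sock

    # While the target runs, its per-core lock is visible to lslocks.
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core 0 lock held by $pid"

    kill "$pid"
    wait "$pid" || true           # reaped with the TERM exit status

    # After the process is gone, no spdk_cpu_lock entries should remain.
    lslocks | grep -q spdk_cpu_lock || echo "no CPU core locks left behind"
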
00:07:36.929 11:03:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.929 11:03:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.929 11:03:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.929 11:03:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:36.929 11:03:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.929 [2024-11-27 11:03:05.735161] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:36.929 [2024-11-27 11:03:05.735297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71494 ] 00:07:37.189 [2024-11-27 11:03:05.884196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.189 [2024-11-27 11:03:05.918612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71494 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71494 00:07:37.755 11:03:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.012 11:03:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71494 00:07:38.013 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71494 ']' 00:07:38.013 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71494 00:07:38.013 11:03:06 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:38.013 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.013 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71494 00:07:38.013 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.013 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.013 killing process with pid 71494 00:07:38.013 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71494' 00:07:38.013 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71494 00:07:38.013 11:03:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71494 00:07:38.270 00:07:38.270 real 0m1.383s 00:07:38.270 user 0m1.424s 00:07:38.270 sys 0m0.412s 00:07:38.270 11:03:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.270 ************************************ 00:07:38.270 END TEST default_locks_via_rpc 00:07:38.270 ************************************ 00:07:38.270 11:03:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.270 11:03:07 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:38.270 11:03:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.270 11:03:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.270 11:03:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.270 ************************************ 00:07:38.270 START TEST non_locking_app_on_locked_coremask 00:07:38.270 ************************************ 00:07:38.270 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:38.270 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71535 00:07:38.270 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71535 /var/tmp/spdk.sock 00:07:38.270 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71535 ']' 00:07:38.270 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.270 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.270 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:38.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.271 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
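
default_locks_via_rpc, which ends just above, covers the runtime variant: instead of restarting the target, the core locks are dropped and re-taken over RPC with framework_disable_cpumask_locks and framework_enable_cpumask_locks. The trace issues them through the rpc_cmd helper; the sketch below assumes that is equivalent to calling scripts/rpc.py against the default /var/tmp/spdk.sock socket, and uses the PID from this run (71494) only as a placeholder for the already-running target.

    #!/usr/bin/env bash
    # Sketch of toggling CPU core locks at runtime.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    pid=71494                    # PID of the spdk_tgt already running in this run

    # Release the per-core lock files without stopping the target...
    $rpc framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "core locks released"

    # ...and take them back, still without a restart.
    $rpc framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core locks re-acquired"
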
00:07:38.271 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.271 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.529 [2024-11-27 11:03:07.171737] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:38.529 [2024-11-27 11:03:07.171858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71535 ] 00:07:38.529 [2024-11-27 11:03:07.313514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.529 [2024-11-27 11:03:07.344370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.462 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.462 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:39.462 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:39.462 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71551 00:07:39.462 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71551 /var/tmp/spdk2.sock 00:07:39.462 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71551 ']' 00:07:39.462 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:39.462 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:39.462 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:39.462 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.462 11:03:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.462 [2024-11-27 11:03:08.061397] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:39.462 [2024-11-27 11:03:08.061505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71551 ] 00:07:39.462 [2024-11-27 11:03:08.208168] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:39.462 [2024-11-27 11:03:08.208208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.462 [2024-11-27 11:03:08.270880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.029 11:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.029 11:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:40.029 11:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71535 00:07:40.287 11:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71535 00:07:40.287 11:03:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:40.545 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71535 00:07:40.545 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71535 ']' 00:07:40.545 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71535 00:07:40.545 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:40.545 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.545 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71535 00:07:40.545 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.545 killing process with pid 71535 00:07:40.545 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.545 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71535' 00:07:40.545 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71535 00:07:40.545 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71535 00:07:41.145 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71551 00:07:41.146 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71551 ']' 00:07:41.146 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71551 00:07:41.146 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:41.146 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.146 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71551 00:07:41.146 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:41.146 killing process with pid 71551 00:07:41.146 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:41.146 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71551' 00:07:41.146 11:03:09 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71551 00:07:41.146 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71551 00:07:41.146 00:07:41.146 real 0m2.863s 00:07:41.146 user 0m3.172s 00:07:41.146 sys 0m0.780s 00:07:41.146 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.146 ************************************ 00:07:41.146 END TEST non_locking_app_on_locked_coremask 00:07:41.146 ************************************ 00:07:41.146 11:03:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.433 11:03:10 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:41.433 11:03:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.433 11:03:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.433 11:03:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.433 ************************************ 00:07:41.433 START TEST locking_app_on_unlocked_coremask 00:07:41.433 ************************************ 00:07:41.433 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:41.433 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71609 00:07:41.433 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71609 /var/tmp/spdk.sock 00:07:41.433 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71609 ']' 00:07:41.433 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.433 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:41.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.433 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.433 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.433 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.433 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.433 [2024-11-27 11:03:10.087715] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:41.433 [2024-11-27 11:03:10.087823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71609 ] 00:07:41.433 [2024-11-27 11:03:10.230782] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
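
The test that ends here (non_locking_app_on_locked_coremask) and the one starting below (locking_app_on_unlocked_coremask) are mirror images: both run two spdk_tgt instances on the same -m 0x1 mask, and exactly one of the pair is started with --disable-cpumask-locks so that only the other holds the core-0 lock; the second instance gets its own RPC socket via -r /var/tmp/spdk2.sock. The sketch below shows that launch pattern with the flags copied from the trace, for the variant where the second instance skips the lock (the neighbouring test simply swaps the roles).

    #!/usr/bin/env bash
    # Sketch: two targets sharing core 0, only one of them taking the lock.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First target claims the core-0 lock and serves /var/tmp/spdk.sock.
    $spdk_tgt -m 0x1 &
    pid1=$!
    waitforlisten "$pid1"

    # Second target reuses the mask but skips the lock, on its own socket.
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock

    # Only the first instance shows up with an spdk_cpu_lock entry.
    lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "pid1 holds core 0"

    kill "$pid1" "$pid2"
    wait

Without --disable-cpumask-locks on one of the pair, the second start fails the way locking_app_on_locked_coremask demonstrates further down: "Cannot create lock on core 0, probably process ... has claimed it."
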
00:07:41.433 [2024-11-27 11:03:10.230820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.433 [2024-11-27 11:03:10.259400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.999 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.999 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:41.999 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71625 00:07:41.999 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71625 /var/tmp/spdk2.sock 00:07:41.999 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71625 ']' 00:07:41.999 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:41.999 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:41.999 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:41.999 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:41.999 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.999 11:03:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.258 [2024-11-27 11:03:10.950522] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:42.258 [2024-11-27 11:03:10.950628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71625 ] 00:07:42.258 [2024-11-27 11:03:11.097785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.516 [2024-11-27 11:03:11.160479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.086 11:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.086 11:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:43.086 11:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71625 00:07:43.086 11:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:43.086 11:03:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71625 00:07:43.347 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71609 00:07:43.347 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71609 ']' 00:07:43.347 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71609 00:07:43.347 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:43.347 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.347 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71609 00:07:43.347 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.347 killing process with pid 71609 00:07:43.347 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.347 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71609' 00:07:43.347 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71609 00:07:43.347 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71609 00:07:43.918 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71625 00:07:43.918 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71625 ']' 00:07:43.918 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71625 00:07:43.918 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:44.176 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.176 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71625 00:07:44.176 killing process with pid 71625 00:07:44.176 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.176 11:03:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.176 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71625' 00:07:44.176 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71625 00:07:44.176 11:03:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71625 00:07:44.433 00:07:44.433 real 0m3.133s 00:07:44.433 user 0m3.354s 00:07:44.433 sys 0m0.826s 00:07:44.433 11:03:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.433 11:03:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.433 ************************************ 00:07:44.433 END TEST locking_app_on_unlocked_coremask 00:07:44.433 ************************************ 00:07:44.433 11:03:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:44.433 11:03:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.433 11:03:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.433 11:03:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.433 ************************************ 00:07:44.433 START TEST locking_app_on_locked_coremask 00:07:44.434 ************************************ 00:07:44.434 11:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:44.434 11:03:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71683 00:07:44.434 11:03:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71683 /var/tmp/spdk.sock 00:07:44.434 11:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71683 ']' 00:07:44.434 11:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.434 11:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.434 11:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.434 11:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.434 11:03:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.434 11:03:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:44.434 [2024-11-27 11:03:13.256553] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:44.434 [2024-11-27 11:03:13.256669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71683 ] 00:07:44.692 [2024-11-27 11:03:13.399285] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.692 [2024-11-27 11:03:13.440867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71699 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71699 /var/tmp/spdk2.sock 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71699 /var/tmp/spdk2.sock 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:45.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71699 /var/tmp/spdk2.sock 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71699 ']' 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.260 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.518 [2024-11-27 11:03:14.164083] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:45.518 [2024-11-27 11:03:14.164710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71699 ] 00:07:45.518 [2024-11-27 11:03:14.313016] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71683 has claimed it. 00:07:45.518 [2024-11-27 11:03:14.313071] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:46.084 ERROR: process (pid: 71699) is no longer running 00:07:46.084 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71699) - No such process 00:07:46.084 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.084 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:46.084 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:46.084 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.084 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.084 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.084 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71683 00:07:46.084 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71683 00:07:46.084 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:46.343 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71683 00:07:46.343 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71683 ']' 00:07:46.343 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71683 00:07:46.343 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:46.343 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.343 11:03:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71683 00:07:46.343 killing process with pid 71683 00:07:46.343 11:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.343 11:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.343 11:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71683' 00:07:46.343 11:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71683 00:07:46.343 11:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71683 00:07:46.601 ************************************ 00:07:46.601 END TEST locking_app_on_locked_coremask 00:07:46.601 ************************************ 00:07:46.601 00:07:46.601 real 0m2.152s 00:07:46.601 user 0m2.361s 00:07:46.601 sys 0m0.551s 00:07:46.601 11:03:15 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.601 11:03:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.601 11:03:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:46.601 11:03:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.601 11:03:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.601 11:03:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.601 ************************************ 00:07:46.601 START TEST locking_overlapped_coremask 00:07:46.601 ************************************ 00:07:46.601 11:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:46.601 11:03:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71741 00:07:46.601 11:03:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71741 /var/tmp/spdk.sock 00:07:46.601 11:03:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:46.601 11:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71741 ']' 00:07:46.601 11:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.601 11:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.601 11:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.601 11:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.601 11:03:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.601 [2024-11-27 11:03:15.462008] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
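Annotation: each of these sub-tests is driven through run_test, which prints the starred START TEST / END TEST banners and the real/user/sys timing seen above around the named function. A loose re-creation of that wrapper (banner width and exact time handling are assumptions; only the visible behaviour comes from the log):

```bash
# Loose re-creation of the run_test wrapper behaviour visible in this log:
# starred banner, timed execution of the named test function, closing banner.
run_test_sketch() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Usage mirroring the trace:
#   run_test_sketch locking_overlapped_coremask locking_overlapped_coremask
```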
00:07:46.601 [2024-11-27 11:03:15.462135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71741 ] 00:07:46.859 [2024-11-27 11:03:15.610225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.859 [2024-11-27 11:03:15.652379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.859 [2024-11-27 11:03:15.652511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.859 [2024-11-27 11:03:15.652571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.425 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.425 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:47.425 11:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71759 00:07:47.425 11:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71759 /var/tmp/spdk2.sock 00:07:47.425 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:47.425 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71759 /var/tmp/spdk2.sock 00:07:47.425 11:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:47.425 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:47.425 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.425 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:47.425 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.426 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71759 /var/tmp/spdk2.sock 00:07:47.426 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71759 ']' 00:07:47.426 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.426 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:47.426 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.426 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.426 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.683 [2024-11-27 11:03:16.363723] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
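Annotation: the first target in locking_overlapped_coremask runs with -m 0x7 (cores 0-2) and the one just launched uses -m 0x1c (cores 2-4); the masks overlap on core 2, which is exactly the core named in the claim error that follows. The overlap is quick to confirm with shell arithmetic:

```bash
# The two core masks used by locking_overlapped_coremask overlap on core 2:
# 0x7 = cores 0,1,2 and 0x1c = cores 2,3,4.
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> overlap mask: 0x4 (bit 2, i.e. core 2)
```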
00:07:47.683 [2024-11-27 11:03:16.363853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71759 ] 00:07:47.683 [2024-11-27 11:03:16.520589] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71741 has claimed it. 00:07:47.683 [2024-11-27 11:03:16.520651] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:48.250 ERROR: process (pid: 71759) is no longer running 00:07:48.250 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71759) - No such process 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71741 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71741 ']' 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71741 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:48.250 11:03:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.250 11:03:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71741 00:07:48.250 11:03:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.250 killing process with pid 71741 00:07:48.250 11:03:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.250 11:03:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71741' 00:07:48.250 11:03:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71741 00:07:48.250 11:03:17 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71741 00:07:48.508 00:07:48.508 real 0m1.959s 00:07:48.508 user 0m5.303s 00:07:48.508 sys 0m0.444s 00:07:48.508 11:03:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.508 11:03:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.508 ************************************ 00:07:48.508 END TEST locking_overlapped_coremask 00:07:48.508 ************************************ 00:07:48.508 11:03:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:48.508 11:03:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.508 11:03:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.508 11:03:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.508 ************************************ 00:07:48.508 START TEST locking_overlapped_coremask_via_rpc 00:07:48.508 ************************************ 00:07:48.508 11:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:48.508 11:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71801 00:07:48.508 11:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71801 /var/tmp/spdk.sock 00:07:48.508 11:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71801 ']' 00:07:48.508 11:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.508 11:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.508 11:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.508 11:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.508 11:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.508 11:03:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:48.766 [2024-11-27 11:03:17.460398] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:48.766 [2024-11-27 11:03:17.460651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71801 ] 00:07:48.766 [2024-11-27 11:03:17.605050] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:48.766 [2024-11-27 11:03:17.605263] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.766 [2024-11-27 11:03:17.647361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.766 [2024-11-27 11:03:17.648257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.766 [2024-11-27 11:03:17.648280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:49.700 11:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.700 11:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:49.700 11:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:49.700 11:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71819 00:07:49.700 11:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71819 /var/tmp/spdk2.sock 00:07:49.700 11:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71819 ']' 00:07:49.700 11:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:49.700 11:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.700 11:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:49.700 11:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.700 11:03:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.700 [2024-11-27 11:03:18.353286] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:49.700 [2024-11-27 11:03:18.353579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71819 ] 00:07:49.700 [2024-11-27 11:03:18.507832] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
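Annotation: in the via_rpc variant both targets start with --disable-cpumask-locks, which is why the log prints "CPU core locks deactivated." and the second target's overlapping mask (0x1c, sharing core 2 with 0x7) boots without conflict; the locks are only claimed later over RPC. A sketch of that launch sequence, with paths and variable names as in the trace:

```bash
# Sketch of the locking_overlapped_coremask_via_rpc launch sequence:
# both targets boot with core locks deactivated, so the overlapping masks
# (0x7 and 0x1c, sharing core 2) do not collide at startup.
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_TGT" -m 0x7 --disable-cpumask-locks &
spdk_tgt_pid=$!

"$SPDK_TGT" -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
spdk_tgt_pid2=$!
```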
00:07:49.700 [2024-11-27 11:03:18.507881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.700 [2024-11-27 11:03:18.579934] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.700 [2024-11-27 11:03:18.580067] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.700 [2024-11-27 11:03:18.580124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.634 [2024-11-27 11:03:19.175071] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71801 has claimed it. 00:07:50.634 request: 00:07:50.634 { 00:07:50.634 "method": "framework_enable_cpumask_locks", 00:07:50.634 "req_id": 1 00:07:50.634 } 00:07:50.634 Got JSON-RPC error response 00:07:50.634 response: 00:07:50.634 { 00:07:50.634 "code": -32603, 00:07:50.634 "message": "Failed to claim CPU core: 2" 00:07:50.634 } 00:07:50.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
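Annotation: the JSON-RPC exchange above is the core of this test: framework_enable_cpumask_locks succeeds on the first target but returns -32603 "Failed to claim CPU core: 2" on the second, since core 2 is already locked. A minimal reproduction with rpc.py, using the socket paths from the trace:

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# First target (mask 0x7, default socket /var/tmp/spdk.sock): claims cores 0-2, succeeds.
"$RPC" framework_enable_cpumask_locks

# Second target (mask 0x1c, /var/tmp/spdk2.sock): core 2 is already locked,
# so this returns the JSON-RPC error -32603 seen above.
"$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks || echo "expected failure"
```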
00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71801 /var/tmp/spdk.sock 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71801 ']' 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71819 /var/tmp/spdk2.sock 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71819 ']' 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
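Annotation: after the failed claim the test re-attaches to both targets and finishes with check_remaining_locks, traced just below: the lock files present in /var/tmp must be exactly the set expected for the 3-core mask. A re-creation of that comparison based on the traced commands:

```bash
# Re-creation of check_remaining_locks as traced in cpu_locks.sh:
# the lock files present must be exactly spdk_cpu_lock_000..002 for the 3-core mask.
check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]
}
```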
00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.634 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.893 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.893 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:50.893 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:50.893 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:50.893 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:50.893 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:50.893 00:07:50.893 real 0m2.157s 00:07:50.893 user 0m0.964s 00:07:50.893 sys 0m0.129s 00:07:50.893 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.893 ************************************ 00:07:50.893 END TEST locking_overlapped_coremask_via_rpc 00:07:50.893 ************************************ 00:07:50.893 11:03:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.893 11:03:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:50.893 11:03:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71801 ]] 00:07:50.893 11:03:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71801 00:07:50.893 11:03:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71801 ']' 00:07:50.893 11:03:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71801 00:07:50.893 11:03:19 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:50.893 11:03:19 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.893 11:03:19 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71801 00:07:50.893 killing process with pid 71801 00:07:50.893 11:03:19 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:50.893 11:03:19 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:50.893 11:03:19 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71801' 00:07:50.893 11:03:19 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71801 00:07:50.893 11:03:19 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71801 00:07:51.151 11:03:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71819 ]] 00:07:51.151 11:03:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71819 00:07:51.151 11:03:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71819 ']' 00:07:51.151 11:03:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71819 00:07:51.151 11:03:19 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:51.151 11:03:19 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.151 
11:03:19 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71819 00:07:51.151 killing process with pid 71819 00:07:51.151 11:03:19 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:51.151 11:03:19 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:51.151 11:03:19 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71819' 00:07:51.151 11:03:19 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71819 00:07:51.151 11:03:19 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71819 00:07:51.410 11:03:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:51.410 Process with pid 71801 is not found 00:07:51.410 Process with pid 71819 is not found 00:07:51.410 11:03:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:51.410 11:03:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71801 ]] 00:07:51.410 11:03:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71801 00:07:51.410 11:03:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71801 ']' 00:07:51.410 11:03:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71801 00:07:51.410 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71801) - No such process 00:07:51.410 11:03:20 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71801 is not found' 00:07:51.410 11:03:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71819 ]] 00:07:51.410 11:03:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71819 00:07:51.410 11:03:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71819 ']' 00:07:51.410 11:03:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71819 00:07:51.410 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71819) - No such process 00:07:51.410 11:03:20 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71819 is not found' 00:07:51.410 11:03:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:51.410 ************************************ 00:07:51.410 END TEST cpu_locks 00:07:51.410 ************************************ 00:07:51.410 00:07:51.410 real 0m16.133s 00:07:51.410 user 0m27.917s 00:07:51.410 sys 0m4.365s 00:07:51.410 11:03:20 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.410 11:03:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.410 ************************************ 00:07:51.410 END TEST event 00:07:51.410 ************************************ 00:07:51.410 00:07:51.410 real 0m42.049s 00:07:51.410 user 1m20.987s 00:07:51.410 sys 0m7.305s 00:07:51.410 11:03:20 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.410 11:03:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:51.410 11:03:20 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:51.410 11:03:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.410 11:03:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.410 11:03:20 -- common/autotest_common.sh@10 -- # set +x 00:07:51.410 ************************************ 00:07:51.410 START TEST thread 00:07:51.410 ************************************ 00:07:51.410 11:03:20 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:51.669 * Looking for test storage... 
00:07:51.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:51.669 11:03:20 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:51.669 11:03:20 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:51.669 11:03:20 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:51.669 11:03:20 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:51.669 11:03:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.669 11:03:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.669 11:03:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.669 11:03:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.669 11:03:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.669 11:03:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.669 11:03:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.669 11:03:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.669 11:03:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.669 11:03:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.669 11:03:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.669 11:03:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:51.669 11:03:20 thread -- scripts/common.sh@345 -- # : 1 00:07:51.669 11:03:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.669 11:03:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.669 11:03:20 thread -- scripts/common.sh@365 -- # decimal 1 00:07:51.669 11:03:20 thread -- scripts/common.sh@353 -- # local d=1 00:07:51.669 11:03:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.669 11:03:20 thread -- scripts/common.sh@355 -- # echo 1 00:07:51.669 11:03:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.669 11:03:20 thread -- scripts/common.sh@366 -- # decimal 2 00:07:51.669 11:03:20 thread -- scripts/common.sh@353 -- # local d=2 00:07:51.669 11:03:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.669 11:03:20 thread -- scripts/common.sh@355 -- # echo 2 00:07:51.669 11:03:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.669 11:03:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.669 11:03:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.669 11:03:20 thread -- scripts/common.sh@368 -- # return 0 00:07:51.669 11:03:20 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.669 11:03:20 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.669 --rc genhtml_branch_coverage=1 00:07:51.669 --rc genhtml_function_coverage=1 00:07:51.669 --rc genhtml_legend=1 00:07:51.669 --rc geninfo_all_blocks=1 00:07:51.669 --rc geninfo_unexecuted_blocks=1 00:07:51.669 00:07:51.669 ' 00:07:51.669 11:03:20 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.669 --rc genhtml_branch_coverage=1 00:07:51.669 --rc genhtml_function_coverage=1 00:07:51.669 --rc genhtml_legend=1 00:07:51.669 --rc geninfo_all_blocks=1 00:07:51.669 --rc geninfo_unexecuted_blocks=1 00:07:51.669 00:07:51.669 ' 00:07:51.669 11:03:20 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:51.669 --rc genhtml_branch_coverage=1 00:07:51.669 --rc genhtml_function_coverage=1 00:07:51.669 --rc genhtml_legend=1 00:07:51.669 --rc geninfo_all_blocks=1 00:07:51.669 --rc geninfo_unexecuted_blocks=1 00:07:51.669 00:07:51.669 ' 00:07:51.669 11:03:20 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.669 --rc genhtml_branch_coverage=1 00:07:51.669 --rc genhtml_function_coverage=1 00:07:51.669 --rc genhtml_legend=1 00:07:51.669 --rc geninfo_all_blocks=1 00:07:51.669 --rc geninfo_unexecuted_blocks=1 00:07:51.669 00:07:51.669 ' 00:07:51.669 11:03:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:51.669 11:03:20 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:51.669 11:03:20 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.669 11:03:20 thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.669 ************************************ 00:07:51.669 START TEST thread_poller_perf 00:07:51.669 ************************************ 00:07:51.669 11:03:20 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:51.669 [2024-11-27 11:03:20.470343] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:51.669 [2024-11-27 11:03:20.470554] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71946 ] 00:07:51.927 [2024-11-27 11:03:20.615328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.927 [2024-11-27 11:03:20.656279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.927 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:52.860 [2024-11-27T11:03:21.743Z] ====================================== 00:07:52.860 [2024-11-27T11:03:21.743Z] busy:2606383900 (cyc) 00:07:52.860 [2024-11-27T11:03:21.743Z] total_run_count: 401000 00:07:52.860 [2024-11-27T11:03:21.743Z] tsc_hz: 2600000000 (cyc) 00:07:52.860 [2024-11-27T11:03:21.743Z] ====================================== 00:07:52.860 [2024-11-27T11:03:21.744Z] poller_cost: 6499 (cyc), 2499 (nsec) 00:07:52.861 00:07:52.861 real 0m1.282s 00:07:52.861 user 0m1.103s 00:07:52.861 sys 0m0.073s 00:07:52.861 11:03:21 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.861 11:03:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:52.861 ************************************ 00:07:52.861 END TEST thread_poller_perf 00:07:52.861 ************************************ 00:07:53.119 11:03:21 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:53.119 11:03:21 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:53.119 11:03:21 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.119 11:03:21 thread -- common/autotest_common.sh@10 -- # set +x 00:07:53.119 ************************************ 00:07:53.119 START TEST thread_poller_perf 00:07:53.119 ************************************ 00:07:53.119 11:03:21 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:53.119 [2024-11-27 11:03:21.797430] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:53.119 [2024-11-27 11:03:21.797658] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71987 ] 00:07:53.119 [2024-11-27 11:03:21.944216] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.119 [2024-11-27 11:03:21.984185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.119 Running 1000 pollers for 1 seconds with 0 microseconds period. 
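Annotation: the figures in the first run's banner above are internally consistent: poller_cost is busy cycles divided by total_run_count, and the nanosecond value follows from the 2.6 GHz TSC reported as tsc_hz. A quick check of that arithmetic:

```bash
# Verify the first poller_perf result from the log:
# 2606383900 busy cycles over 401000 runs -> 6499 cyc/poller, i.e. ~2499 ns at 2.6 GHz.
awk 'BEGIN {
    busy = 2606383900; runs = 401000; tsc_hz = 2600000000
    cyc  = int(busy / runs)
    nsec = int(cyc * 1e9 / tsc_hz)
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
}'
```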
00:07:54.493 [2024-11-27T11:03:23.376Z] ====================================== 00:07:54.493 [2024-11-27T11:03:23.376Z] busy:2602725352 (cyc) 00:07:54.493 [2024-11-27T11:03:23.376Z] total_run_count: 5197000 00:07:54.493 [2024-11-27T11:03:23.376Z] tsc_hz: 2600000000 (cyc) 00:07:54.493 [2024-11-27T11:03:23.376Z] ====================================== 00:07:54.493 [2024-11-27T11:03:23.376Z] poller_cost: 500 (cyc), 192 (nsec) 00:07:54.493 00:07:54.493 real 0m1.282s 00:07:54.493 user 0m1.109s 00:07:54.493 sys 0m0.067s 00:07:54.493 11:03:23 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.493 ************************************ 00:07:54.493 END TEST thread_poller_perf 00:07:54.493 ************************************ 00:07:54.493 11:03:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:54.493 11:03:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:54.493 ************************************ 00:07:54.493 END TEST thread 00:07:54.493 ************************************ 00:07:54.493 00:07:54.493 real 0m2.806s 00:07:54.493 user 0m2.322s 00:07:54.493 sys 0m0.262s 00:07:54.493 11:03:23 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.493 11:03:23 thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.493 11:03:23 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:54.493 11:03:23 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:54.493 11:03:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.493 11:03:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.493 11:03:23 -- common/autotest_common.sh@10 -- # set +x 00:07:54.493 ************************************ 00:07:54.493 START TEST app_cmdline 00:07:54.493 ************************************ 00:07:54.493 11:03:23 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:54.493 * Looking for test storage... 
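Annotation: each suite's prologue repeats the same lcov version check (lt 1.15 2 via cmp_versions), traced again just below before app_cmdline starts; condensed into a standalone helper, the comparison splits both versions on '.', '-' and ':' and compares them field by field:

```bash
# Condensed re-creation of the lt/cmp_versions helper traced before each suite:
# split both versions on '.', '-' and ':' and compare them numerically field by field.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller field: less-than
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger field: not less-than
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 check in the trace
```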
00:07:54.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:54.493 11:03:23 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:54.493 11:03:23 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:54.493 11:03:23 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:54.493 11:03:23 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:54.493 11:03:23 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.493 11:03:23 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.493 11:03:23 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.493 11:03:23 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.493 11:03:23 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.493 11:03:23 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.493 11:03:23 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.493 11:03:23 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.493 11:03:23 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:54.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.494 11:03:23 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:54.494 11:03:23 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.494 11:03:23 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:54.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.494 --rc genhtml_branch_coverage=1 00:07:54.494 --rc genhtml_function_coverage=1 00:07:54.494 --rc genhtml_legend=1 00:07:54.494 --rc geninfo_all_blocks=1 00:07:54.494 --rc geninfo_unexecuted_blocks=1 00:07:54.494 00:07:54.494 ' 00:07:54.494 11:03:23 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:54.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.494 --rc genhtml_branch_coverage=1 00:07:54.494 --rc genhtml_function_coverage=1 00:07:54.494 --rc genhtml_legend=1 00:07:54.494 --rc geninfo_all_blocks=1 00:07:54.494 --rc geninfo_unexecuted_blocks=1 00:07:54.494 00:07:54.494 ' 00:07:54.494 11:03:23 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:54.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.494 --rc genhtml_branch_coverage=1 00:07:54.494 --rc genhtml_function_coverage=1 00:07:54.494 --rc genhtml_legend=1 00:07:54.494 --rc geninfo_all_blocks=1 00:07:54.494 --rc geninfo_unexecuted_blocks=1 00:07:54.494 00:07:54.494 ' 00:07:54.494 11:03:23 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:54.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.494 --rc genhtml_branch_coverage=1 00:07:54.494 --rc genhtml_function_coverage=1 00:07:54.494 --rc genhtml_legend=1 00:07:54.494 --rc geninfo_all_blocks=1 00:07:54.494 --rc geninfo_unexecuted_blocks=1 00:07:54.494 00:07:54.494 ' 00:07:54.494 11:03:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:54.494 11:03:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72066 00:07:54.494 11:03:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72066 00:07:54.494 11:03:23 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72066 ']' 00:07:54.494 11:03:23 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.494 11:03:23 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.494 11:03:23 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.494 11:03:23 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.494 11:03:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:54.494 11:03:23 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:54.494 [2024-11-27 11:03:23.343777] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
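Annotation: the app_cmdline target above is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable; the test reads the version over RPC and then expects the non-whitelisted env_dpdk_get_mem_stats call to fail with -32601 "Method not found", as shown further below. A minimal reproduction (jq is used the same way the test itself does):

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Allowed method: returns the version JSON shown in the log
# ("SPDK v24.09.1-pre git sha1 b18e1bd62").
"$RPC" spdk_get_version | jq -r '.version'

# Allowed method: list the callable methods (exactly the two whitelisted ones).
"$RPC" rpc_get_methods | jq -r '.[]' | sort

# Not in --rpcs-allowed: expected to fail with JSON-RPC error -32601 (Method not found).
"$RPC" env_dpdk_get_mem_stats || echo "expected failure"
```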
00:07:54.494 [2024-11-27 11:03:23.343869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72066 ] 00:07:54.752 [2024-11-27 11:03:23.484773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.752 [2024-11-27 11:03:23.525837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.365 11:03:24 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.365 11:03:24 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:55.365 11:03:24 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:55.624 { 00:07:55.624 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:55.624 "fields": { 00:07:55.624 "major": 24, 00:07:55.624 "minor": 9, 00:07:55.624 "patch": 1, 00:07:55.624 "suffix": "-pre", 00:07:55.624 "commit": "b18e1bd62" 00:07:55.624 } 00:07:55.624 } 00:07:55.624 11:03:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:55.624 11:03:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:55.624 11:03:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:55.624 11:03:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:55.624 11:03:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:55.624 11:03:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:55.624 11:03:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.624 11:03:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:55.624 11:03:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:55.624 11:03:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:55.624 11:03:24 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.883 request: 00:07:55.883 { 00:07:55.883 "method": "env_dpdk_get_mem_stats", 00:07:55.883 "req_id": 1 00:07:55.883 } 00:07:55.883 Got JSON-RPC error response 00:07:55.883 response: 00:07:55.883 { 00:07:55.883 "code": -32601, 00:07:55.883 "message": "Method not found" 00:07:55.883 } 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.883 11:03:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72066 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72066 ']' 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72066 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72066 00:07:55.883 killing process with pid 72066 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72066' 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@969 -- # kill 72066 00:07:55.883 11:03:24 app_cmdline -- common/autotest_common.sh@974 -- # wait 72066 00:07:56.141 00:07:56.141 real 0m1.776s 00:07:56.141 user 0m2.049s 00:07:56.141 sys 0m0.425s 00:07:56.141 ************************************ 00:07:56.141 END TEST app_cmdline 00:07:56.141 ************************************ 00:07:56.141 11:03:24 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.141 11:03:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:56.141 11:03:24 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:56.141 11:03:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.141 11:03:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.141 11:03:24 -- common/autotest_common.sh@10 -- # set +x 00:07:56.141 ************************************ 00:07:56.141 START TEST version 00:07:56.141 ************************************ 00:07:56.141 11:03:24 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:56.400 * Looking for test storage... 
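Annotation: the version test starting here pulls each component out of include/spdk/version.h with the grep/cut/tr pipeline traced below and rebuilds the 24.9.1rc0 string that must match the Python package. A condensed form of that extraction, using the header path from the trace (the -pre to rc0 rewrite mirrors the values the trace shows; the exact mechanism in version.sh may differ):

```bash
# Condensed form of version.sh's get_header_version, as traced below:
# pull one field out of include/spdk/version.h and strip surrounding quotes.
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
        /home/vagrant/spdk_repo/spdk/include/spdk/version.h \
        | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)    # 24
minor=$(get_header_version MINOR)    # 9
patch=$(get_header_version PATCH)    # 1
suffix=$(get_header_version SUFFIX)  # -pre
echo "${major}.${minor}.${patch}${suffix/-pre/rc0}"   # 24.9.1rc0
```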
00:07:56.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:56.400 11:03:25 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:56.400 11:03:25 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:56.400 11:03:25 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:56.400 11:03:25 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:56.400 11:03:25 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.400 11:03:25 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.400 11:03:25 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.400 11:03:25 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.400 11:03:25 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.400 11:03:25 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.400 11:03:25 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.400 11:03:25 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.400 11:03:25 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.400 11:03:25 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.400 11:03:25 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.400 11:03:25 version -- scripts/common.sh@344 -- # case "$op" in 00:07:56.400 11:03:25 version -- scripts/common.sh@345 -- # : 1 00:07:56.400 11:03:25 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.400 11:03:25 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.400 11:03:25 version -- scripts/common.sh@365 -- # decimal 1 00:07:56.400 11:03:25 version -- scripts/common.sh@353 -- # local d=1 00:07:56.400 11:03:25 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.400 11:03:25 version -- scripts/common.sh@355 -- # echo 1 00:07:56.400 11:03:25 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.400 11:03:25 version -- scripts/common.sh@366 -- # decimal 2 00:07:56.400 11:03:25 version -- scripts/common.sh@353 -- # local d=2 00:07:56.400 11:03:25 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.400 11:03:25 version -- scripts/common.sh@355 -- # echo 2 00:07:56.400 11:03:25 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.400 11:03:25 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.400 11:03:25 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.400 11:03:25 version -- scripts/common.sh@368 -- # return 0 00:07:56.400 11:03:25 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.400 11:03:25 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:56.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.400 --rc genhtml_branch_coverage=1 00:07:56.400 --rc genhtml_function_coverage=1 00:07:56.400 --rc genhtml_legend=1 00:07:56.400 --rc geninfo_all_blocks=1 00:07:56.400 --rc geninfo_unexecuted_blocks=1 00:07:56.400 00:07:56.400 ' 00:07:56.400 11:03:25 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:56.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.400 --rc genhtml_branch_coverage=1 00:07:56.400 --rc genhtml_function_coverage=1 00:07:56.400 --rc genhtml_legend=1 00:07:56.400 --rc geninfo_all_blocks=1 00:07:56.400 --rc geninfo_unexecuted_blocks=1 00:07:56.400 00:07:56.400 ' 00:07:56.400 11:03:25 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:56.400 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:56.400 --rc genhtml_branch_coverage=1 00:07:56.400 --rc genhtml_function_coverage=1 00:07:56.400 --rc genhtml_legend=1 00:07:56.400 --rc geninfo_all_blocks=1 00:07:56.400 --rc geninfo_unexecuted_blocks=1 00:07:56.400 00:07:56.400 ' 00:07:56.400 11:03:25 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:56.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.400 --rc genhtml_branch_coverage=1 00:07:56.400 --rc genhtml_function_coverage=1 00:07:56.400 --rc genhtml_legend=1 00:07:56.400 --rc geninfo_all_blocks=1 00:07:56.400 --rc geninfo_unexecuted_blocks=1 00:07:56.400 00:07:56.400 ' 00:07:56.400 11:03:25 version -- app/version.sh@17 -- # get_header_version major 00:07:56.400 11:03:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:56.400 11:03:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:56.400 11:03:25 version -- app/version.sh@14 -- # cut -f2 00:07:56.400 11:03:25 version -- app/version.sh@17 -- # major=24 00:07:56.400 11:03:25 version -- app/version.sh@18 -- # get_header_version minor 00:07:56.400 11:03:25 version -- app/version.sh@14 -- # cut -f2 00:07:56.400 11:03:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:56.400 11:03:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:56.400 11:03:25 version -- app/version.sh@18 -- # minor=9 00:07:56.400 11:03:25 version -- app/version.sh@19 -- # get_header_version patch 00:07:56.400 11:03:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:56.400 11:03:25 version -- app/version.sh@14 -- # cut -f2 00:07:56.400 11:03:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:56.400 11:03:25 version -- app/version.sh@19 -- # patch=1 00:07:56.400 11:03:25 version -- app/version.sh@20 -- # get_header_version suffix 00:07:56.400 11:03:25 version -- app/version.sh@14 -- # cut -f2 00:07:56.400 11:03:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:56.400 11:03:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:56.400 11:03:25 version -- app/version.sh@20 -- # suffix=-pre 00:07:56.400 11:03:25 version -- app/version.sh@22 -- # version=24.9 00:07:56.400 11:03:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:56.401 11:03:25 version -- app/version.sh@25 -- # version=24.9.1 00:07:56.401 11:03:25 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:56.401 11:03:25 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:56.401 11:03:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:56.401 11:03:25 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:56.401 11:03:25 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:56.401 00:07:56.401 real 0m0.208s 00:07:56.401 user 0m0.129s 00:07:56.401 sys 0m0.103s 00:07:56.401 ************************************ 00:07:56.401 END TEST version 00:07:56.401 ************************************ 00:07:56.401 11:03:25 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.401 11:03:25 
version -- common/autotest_common.sh@10 -- # set +x 00:07:56.401 11:03:25 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:56.401 11:03:25 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:56.401 11:03:25 -- spdk/autotest.sh@194 -- # uname -s 00:07:56.401 11:03:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:56.401 11:03:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:56.401 11:03:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:56.401 11:03:25 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:56.401 11:03:25 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:56.401 11:03:25 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:56.401 11:03:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.401 11:03:25 -- common/autotest_common.sh@10 -- # set +x 00:07:56.401 ************************************ 00:07:56.401 START TEST blockdev_nvme 00:07:56.401 ************************************ 00:07:56.401 11:03:25 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:56.401 * Looking for test storage... 00:07:56.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:56.401 11:03:25 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:56.401 11:03:25 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:56.401 11:03:25 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:56.659 11:03:25 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.659 11:03:25 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:56.659 11:03:25 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.659 11:03:25 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:56.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.659 --rc genhtml_branch_coverage=1 00:07:56.659 --rc genhtml_function_coverage=1 00:07:56.659 --rc genhtml_legend=1 00:07:56.659 --rc geninfo_all_blocks=1 00:07:56.659 --rc geninfo_unexecuted_blocks=1 00:07:56.659 00:07:56.659 ' 00:07:56.659 11:03:25 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:56.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.660 --rc genhtml_branch_coverage=1 00:07:56.660 --rc genhtml_function_coverage=1 00:07:56.660 --rc genhtml_legend=1 00:07:56.660 --rc geninfo_all_blocks=1 00:07:56.660 --rc geninfo_unexecuted_blocks=1 00:07:56.660 00:07:56.660 ' 00:07:56.660 11:03:25 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:56.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.660 --rc genhtml_branch_coverage=1 00:07:56.660 --rc genhtml_function_coverage=1 00:07:56.660 --rc genhtml_legend=1 00:07:56.660 --rc geninfo_all_blocks=1 00:07:56.660 --rc geninfo_unexecuted_blocks=1 00:07:56.660 00:07:56.660 ' 00:07:56.660 11:03:25 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:56.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.660 --rc genhtml_branch_coverage=1 00:07:56.660 --rc genhtml_function_coverage=1 00:07:56.660 --rc genhtml_legend=1 00:07:56.660 --rc geninfo_all_blocks=1 00:07:56.660 --rc geninfo_unexecuted_blocks=1 00:07:56.660 00:07:56.660 ' 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:56.660 11:03:25 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:56.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72227 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 72227 00:07:56.660 11:03:25 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 72227 ']' 00:07:56.660 11:03:25 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.660 11:03:25 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.660 11:03:25 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.660 11:03:25 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.660 11:03:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.660 11:03:25 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:56.660 [2024-11-27 11:03:25.435065] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
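The waitforlisten step above blocks until the freshly started spdk_tgt (pid 72227) is up and listening on /var/tmp/spdk.sock before any rpc_cmd is issued. Here is a rough Python sketch of that kind of readiness check, polling the socket until it accepts a connection; the real helper in autotest_common.sh may probe differently, and the timeout and retry interval below are illustrative assumptions.

import socket
import time

def wait_for_listen(sock_path="/var/tmp/spdk.sock", timeout=30.0, interval=0.2):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
                return True          # target is up and accepting RPC connections
        except OSError:
            time.sleep(interval)     # socket not ready yet, retry
    return False

if not wait_for_listen():
    raise SystemExit("spdk_tgt did not start listening in time")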
00:07:56.660 [2024-11-27 11:03:25.435187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72227 ] 00:07:56.918 [2024-11-27 11:03:25.580574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.918 [2024-11-27 11:03:25.614272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.486 11:03:26 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.486 11:03:26 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:07:57.486 11:03:26 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:57.486 11:03:26 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:57.486 11:03:26 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:57.486 11:03:26 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:57.486 11:03:26 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:57.486 11:03:26 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:57.486 11:03:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.486 11:03:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.745 11:03:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.745 11:03:26 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:57.745 11:03:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.745 11:03:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.745 11:03:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.745 11:03:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:57.745 11:03:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:57.745 11:03:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.745 11:03:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.745 11:03:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.745 11:03:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:57.745 11:03:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.745 11:03:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.006 11:03:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.006 11:03:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:58.006 11:03:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.006 11:03:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.006 11:03:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.006 11:03:26 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:58.006 11:03:26 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:58.006 11:03:26 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:58.006 11:03:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.006 11:03:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.006 11:03:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.006 11:03:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:58.006 11:03:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:58.006 11:03:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8e594ffc-0d6e-4108-976c-a4f8cd795418"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8e594ffc-0d6e-4108-976c-a4f8cd795418",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "fefc4f15-883c-43a6-9d08-343a04a98720"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fefc4f15-883c-43a6-9d08-343a04a98720",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "cd3af9d9-ab7f-447a-9c25-3e1debf701df"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd3af9d9-ab7f-447a-9c25-3e1debf701df",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5c64bfad-23dc-432b-b806-a54bad340482"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5c64bfad-23dc-432b-b806-a54bad340482",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d82385cc-ca85-4eeb-925d-1713c135b94b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "d82385cc-ca85-4eeb-925d-1713c135b94b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "24aba2a9-3214-4a16-9d51-44a7db2c2cdf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "24aba2a9-3214-4a16-9d51-44a7db2c2cdf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:58.006 11:03:26 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:58.006 11:03:26 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:58.006 11:03:26 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:58.006 11:03:26 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 72227 00:07:58.006 11:03:26 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 72227 ']' 00:07:58.006 11:03:26 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 72227 00:07:58.006 11:03:26 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:07:58.006 11:03:26 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.007 11:03:26 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72227 00:07:58.007 killing process with pid 72227 00:07:58.007 11:03:26 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.007 11:03:26 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.007 11:03:26 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72227' 00:07:58.007 11:03:26 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 72227 00:07:58.007 11:03:26 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 72227 00:07:58.266 11:03:27 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:58.266 11:03:27 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:58.266 11:03:27 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:58.266 11:03:27 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.266 11:03:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.266 ************************************ 00:07:58.266 START TEST bdev_hello_world 00:07:58.266 ************************************ 00:07:58.266 11:03:27 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:58.266 [2024-11-27 11:03:27.097247] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:58.266 [2024-11-27 11:03:27.097414] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72300 ] 00:07:58.525 [2024-11-27 11:03:27.246845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.525 [2024-11-27 11:03:27.305427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.092 [2024-11-27 11:03:27.713419] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:59.092 [2024-11-27 11:03:27.713475] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:59.092 [2024-11-27 11:03:27.713502] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:59.092 [2024-11-27 11:03:27.715751] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:59.092 [2024-11-27 11:03:27.716344] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:59.092 [2024-11-27 11:03:27.716372] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:59.092 [2024-11-27 11:03:27.716569] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
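A few steps above, blockdev.sh collects the candidate bdev names by piping rpc_cmd bdev_get_bdevs through jq -r '.[] | select(.claimed == false)' and jq -r .name, which is how Nvme0n1 ends up as hello_world_bdev. The following is a minimal Python equivalent of that filtering, assuming a running target reachable through the repo's scripts/rpc.py at the path shown in the log; the example output is taken from the bdev dump above.

import json
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

raw = subprocess.run([RPC, "bdev_get_bdevs"], capture_output=True,
                     text=True, check=True).stdout
bdevs = [b for b in json.loads(raw) if not b["claimed"]]   # jq: select(.claimed == false)
names = [b["name"] for b in bdevs]                         # jq: -r .name
print(names)  # e.g. ['Nvme0n1', 'Nvme1n1', 'Nvme2n1', 'Nvme2n2', 'Nvme2n3', 'Nvme3n1']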
00:07:59.092 00:07:59.092 [2024-11-27 11:03:27.716588] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:59.092 00:07:59.092 real 0m0.868s 00:07:59.092 user 0m0.540s 00:07:59.092 sys 0m0.222s 00:07:59.092 11:03:27 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.092 ************************************ 00:07:59.092 END TEST bdev_hello_world 00:07:59.092 ************************************ 00:07:59.092 11:03:27 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 11:03:27 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:59.092 11:03:27 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:59.092 11:03:27 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.092 11:03:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 ************************************ 00:07:59.092 START TEST bdev_bounds 00:07:59.092 ************************************ 00:07:59.092 11:03:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:59.092 11:03:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72330 00:07:59.092 Process bdevio pid: 72330 00:07:59.092 11:03:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:59.092 11:03:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:59.092 11:03:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72330' 00:07:59.092 11:03:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72330 00:07:59.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.092 11:03:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 72330 ']' 00:07:59.092 11:03:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.093 11:03:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.093 11:03:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.093 11:03:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.093 11:03:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:59.351 [2024-11-27 11:03:28.032429] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:59.351 [2024-11-27 11:03:28.032549] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72330 ] 00:07:59.351 [2024-11-27 11:03:28.180335] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.351 [2024-11-27 11:03:28.216537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.351 [2024-11-27 11:03:28.217483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.351 [2024-11-27 11:03:28.217555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.285 11:03:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.285 11:03:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:08:00.285 11:03:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:00.285 I/O targets: 00:08:00.285 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:00.285 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:00.285 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:00.285 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:00.285 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:00.285 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:00.285 00:08:00.285 00:08:00.285 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.285 http://cunit.sourceforge.net/ 00:08:00.285 00:08:00.285 00:08:00.285 Suite: bdevio tests on: Nvme3n1 00:08:00.285 Test: blockdev write read block ...passed 00:08:00.285 Test: blockdev write zeroes read block ...passed 00:08:00.285 Test: blockdev write zeroes read no split ...passed 00:08:00.285 Test: blockdev write zeroes read split ...passed 00:08:00.285 Test: blockdev write zeroes read split partial ...passed 00:08:00.285 Test: blockdev reset ...[2024-11-27 11:03:28.984849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:00.285 passed 00:08:00.285 Test: blockdev write read 8 blocks ...[2024-11-27 11:03:28.987029] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:00.285 passed 00:08:00.285 Test: blockdev write read size > 128k ...passed 00:08:00.285 Test: blockdev write read invalid size ...passed 00:08:00.285 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.285 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.285 Test: blockdev write read max offset ...passed 00:08:00.285 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.285 Test: blockdev writev readv 8 blocks ...passed 00:08:00.285 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.285 Test: blockdev writev readv block ...passed 00:08:00.285 Test: blockdev writev readv size > 128k ...passed 00:08:00.285 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.285 Test: blockdev comparev and writev ...[2024-11-27 11:03:28.992463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cce06000 len:0x1000 00:08:00.285 [2024-11-27 11:03:28.992509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.285 passed 00:08:00.285 Test: blockdev nvme passthru rw ...passed 00:08:00.285 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.285 Test: blockdev nvme admin passthru ...[2024-11-27 11:03:28.993238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.285 [2024-11-27 11:03:28.993281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.285 passed 00:08:00.285 Test: blockdev copy ...passed 00:08:00.285 Suite: bdevio tests on: Nvme2n3 00:08:00.285 Test: blockdev write read block ...passed 00:08:00.285 Test: blockdev write zeroes read block ...passed 00:08:00.285 Test: blockdev write zeroes read no split ...passed 00:08:00.285 Test: blockdev write zeroes read split ...passed 00:08:00.285 Test: blockdev write zeroes read split partial ...passed 00:08:00.285 Test: blockdev reset ...[2024-11-27 11:03:29.005675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:00.285 [2024-11-27 11:03:29.007291] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:00.285 passed 00:08:00.285 Test: blockdev write read 8 blocks ...passed 00:08:00.285 Test: blockdev write read size > 128k ...passed 00:08:00.285 Test: blockdev write read invalid size ...passed 00:08:00.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.286 Test: blockdev write read max offset ...passed 00:08:00.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.286 Test: blockdev writev readv 8 blocks ...passed 00:08:00.286 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.286 Test: blockdev writev readv block ...passed 00:08:00.286 Test: blockdev writev readv size > 128k ...passed 00:08:00.286 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.286 Test: blockdev comparev and writev ...[2024-11-27 11:03:29.011905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dfa05000 len:0x1000 00:08:00.286 [2024-11-27 11:03:29.012031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.286 passed 00:08:00.286 Test: blockdev nvme passthru rw ...passed 00:08:00.286 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.286 Test: blockdev nvme admin passthru ...[2024-11-27 11:03:29.012531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.286 [2024-11-27 11:03:29.012632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.286 passed 00:08:00.286 Test: blockdev copy ...passed 00:08:00.286 Suite: bdevio tests on: Nvme2n2 00:08:00.286 Test: blockdev write read block ...passed 00:08:00.286 Test: blockdev write zeroes read block ...passed 00:08:00.286 Test: blockdev write zeroes read no split ...passed 00:08:00.286 Test: blockdev write zeroes read split ...passed 00:08:00.286 Test: blockdev write zeroes read split partial ...passed 00:08:00.286 Test: blockdev reset ...[2024-11-27 11:03:29.026414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:00.286 [2024-11-27 11:03:29.027951] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:00.286 passed 00:08:00.286 Test: blockdev write read 8 blocks ...passed 00:08:00.286 Test: blockdev write read size > 128k ...passed 00:08:00.286 Test: blockdev write read invalid size ...passed 00:08:00.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.286 Test: blockdev write read max offset ...passed 00:08:00.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.286 Test: blockdev writev readv 8 blocks ...passed 00:08:00.286 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.286 Test: blockdev writev readv block ...passed 00:08:00.286 Test: blockdev writev readv size > 128k ...passed 00:08:00.286 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.286 Test: blockdev comparev and writev ...[2024-11-27 11:03:29.032327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dfe36000 len:0x1000 00:08:00.286 [2024-11-27 11:03:29.032363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.286 passed 00:08:00.286 Test: blockdev nvme passthru rw ...passed 00:08:00.286 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.286 Test: blockdev nvme admin passthru ...[2024-11-27 11:03:29.033057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.286 [2024-11-27 11:03:29.033087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.286 passed 00:08:00.286 Test: blockdev copy ...passed 00:08:00.286 Suite: bdevio tests on: Nvme2n1 00:08:00.286 Test: blockdev write read block ...passed 00:08:00.286 Test: blockdev write zeroes read block ...passed 00:08:00.286 Test: blockdev write zeroes read no split ...passed 00:08:00.286 Test: blockdev write zeroes read split ...passed 00:08:00.286 Test: blockdev write zeroes read split partial ...passed 00:08:00.286 Test: blockdev reset ...[2024-11-27 11:03:29.047064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:00.286 passed 00:08:00.286 Test: blockdev write read 8 blocks ...[2024-11-27 11:03:29.048805] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:00.286 passed 00:08:00.286 Test: blockdev write read size > 128k ...passed 00:08:00.286 Test: blockdev write read invalid size ...passed 00:08:00.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.286 Test: blockdev write read max offset ...passed 00:08:00.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.286 Test: blockdev writev readv 8 blocks ...passed 00:08:00.286 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.286 Test: blockdev writev readv block ...passed 00:08:00.286 Test: blockdev writev readv size > 128k ...passed 00:08:00.286 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.286 Test: blockdev comparev and writev ...[2024-11-27 11:03:29.053738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dfe30000 len:0x1000 00:08:00.286 [2024-11-27 11:03:29.053773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.286 passed 00:08:00.286 Test: blockdev nvme passthru rw ...passed 00:08:00.286 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.286 Test: blockdev nvme admin passthru ...[2024-11-27 11:03:29.054473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.286 [2024-11-27 11:03:29.054503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.286 passed 00:08:00.286 Test: blockdev copy ...passed 00:08:00.286 Suite: bdevio tests on: Nvme1n1 00:08:00.286 Test: blockdev write read block ...passed 00:08:00.286 Test: blockdev write zeroes read block ...passed 00:08:00.286 Test: blockdev write zeroes read no split ...passed 00:08:00.286 Test: blockdev write zeroes read split ...passed 00:08:00.286 Test: blockdev write zeroes read split partial ...passed 00:08:00.286 Test: blockdev reset ...[2024-11-27 11:03:29.067841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:00.286 passed 00:08:00.286 Test: blockdev write read 8 blocks ...[2024-11-27 11:03:29.069218] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:00.286 passed 00:08:00.286 Test: blockdev write read size > 128k ...passed 00:08:00.286 Test: blockdev write read invalid size ...passed 00:08:00.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.286 Test: blockdev write read max offset ...passed 00:08:00.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.286 Test: blockdev writev readv 8 blocks ...passed 00:08:00.286 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.286 Test: blockdev writev readv block ...passed 00:08:00.286 Test: blockdev writev readv size > 128k ...passed 00:08:00.286 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.286 Test: blockdev comparev and writev ...[2024-11-27 11:03:29.073648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:08:00.286 Test: blockdev nvme passthru rw ...passed 00:08:00.286 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.286 Test: blockdev nvme admin passthru ...SGL DATA BLOCK ADDRESS 0x2dfe2c000 len:0x1000 00:08:00.286 [2024-11-27 11:03:29.073832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.286 [2024-11-27 11:03:29.074294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.286 [2024-11-27 11:03:29.074320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.286 passed 00:08:00.286 Test: blockdev copy ...passed 00:08:00.286 Suite: bdevio tests on: Nvme0n1 00:08:00.286 Test: blockdev write read block ...passed 00:08:00.286 Test: blockdev write zeroes read block ...passed 00:08:00.286 Test: blockdev write zeroes read no split ...passed 00:08:00.286 Test: blockdev write zeroes read split ...passed 00:08:00.286 Test: blockdev write zeroes read split partial ...passed 00:08:00.286 Test: blockdev reset ...[2024-11-27 11:03:29.088700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:00.286 passed 00:08:00.286 Test: blockdev write read 8 blocks ...[2024-11-27 11:03:29.091738] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:00.286 passed 00:08:00.286 Test: blockdev write read size > 128k ...passed 00:08:00.286 Test: blockdev write read invalid size ...passed 00:08:00.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.286 Test: blockdev write read max offset ...passed 00:08:00.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.286 Test: blockdev writev readv 8 blocks ...passed 00:08:00.286 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.286 Test: blockdev writev readv block ...passed 00:08:00.286 Test: blockdev writev readv size > 128k ...passed 00:08:00.286 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.286 Test: blockdev comparev and writev ...passed 00:08:00.286 Test: blockdev nvme passthru rw ...[2024-11-27 11:03:29.096834] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:00.286 separate metadata which is not supported yet. 
00:08:00.286 passed 00:08:00.286 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.286 Test: blockdev nvme admin passthru ...[2024-11-27 11:03:29.097600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:00.286 [2024-11-27 11:03:29.097637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:00.286 passed 00:08:00.286 Test: blockdev copy ...passed 00:08:00.286 00:08:00.286 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.286 suites 6 6 n/a 0 0 00:08:00.286 tests 138 138 138 0 0 00:08:00.286 asserts 893 893 893 0 n/a 00:08:00.286 00:08:00.286 Elapsed time = 0.281 seconds 00:08:00.286 0 00:08:00.286 11:03:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72330 00:08:00.286 11:03:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 72330 ']' 00:08:00.286 11:03:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 72330 00:08:00.287 11:03:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:08:00.287 11:03:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:00.287 11:03:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72330 00:08:00.287 11:03:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:00.287 11:03:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:00.287 11:03:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72330' 00:08:00.287 killing process with pid 72330 00:08:00.287 11:03:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 72330 00:08:00.287 11:03:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 72330 00:08:01.230 11:03:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:01.231 00:08:01.231 real 0m1.801s 00:08:01.231 user 0m4.712s 00:08:01.231 sys 0m0.303s 00:08:01.231 11:03:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.231 11:03:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:01.231 ************************************ 00:08:01.231 END TEST bdev_bounds 00:08:01.231 ************************************ 00:08:01.231 11:03:29 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:01.231 11:03:29 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:01.231 11:03:29 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.231 11:03:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.231 ************************************ 00:08:01.231 START TEST bdev_nbd 00:08:01.231 ************************************ 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72374 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72374 /var/tmp/spdk-nbd.sock 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 72374 ']' 00:08:01.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.231 11:03:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:01.231 [2024-11-27 11:03:29.876441] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
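The bdev_nbd test starting here exports each bdev through an NBD device and, as shown just below, verifies it with waitfornbd (a grep of /proc/partitions, retried up to 20 times) followed by a single 4096-byte dd read. A sketch of that per-device check in Python, assuming /dev/nbd0 has already been exported via nbd_start_disk; dd's iflag=direct is omitted because direct I/O needs an aligned buffer, and the retry delay is an assumption.

import os
import time

def wait_for_nbd(name, retries=20, delay=0.1):
    # waitfornbd: the device must appear in /proc/partitions first
    for _ in range(retries):
        with open("/proc/partitions") as parts:
            if any(line.split()[-1:] == [name] for line in parts):
                return True
        time.sleep(delay)
    return False

if wait_for_nbd("nbd0"):
    fd = os.open("/dev/nbd0", os.O_RDONLY)
    try:
        block = os.pread(fd, 4096, 0)   # dd if=/dev/nbd0 bs=4096 count=1
    finally:
        os.close(fd)
    assert len(block) == 4096           # mirrors the 4096-byte size check in the log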
00:08:01.231 [2024-11-27 11:03:29.876675] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.231 [2024-11-27 11:03:30.023451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.231 [2024-11-27 11:03:30.057557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.166 1+0 records in 
00:08:02.166 1+0 records out 00:08:02.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729202 s, 5.6 MB/s 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:02.166 11:03:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:02.424 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:02.424 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:02.424 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:02.424 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:02.424 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:02.424 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.424 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.424 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:02.424 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:02.424 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.424 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.424 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.424 1+0 records in 00:08:02.425 1+0 records out 00:08:02.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426836 s, 9.6 MB/s 00:08:02.425 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.425 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:02.425 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.425 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.425 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:02.425 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.425 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:02.425 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.682 1+0 records in 00:08:02.682 1+0 records out 00:08:02.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305156 s, 13.4 MB/s 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:02.682 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.940 1+0 records in 00:08:02.940 1+0 records out 00:08:02.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319364 s, 12.8 MB/s 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.940 11:03:31 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:02.940 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.198 1+0 records in 00:08:03.198 1+0 records out 00:08:03.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524246 s, 7.8 MB/s 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.198 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:03.199 11:03:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.457 1+0 records in 00:08:03.457 1+0 records out 00:08:03.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513949 s, 8.0 MB/s 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:03.457 { 00:08:03.457 "nbd_device": "/dev/nbd0", 00:08:03.457 "bdev_name": "Nvme0n1" 00:08:03.457 }, 00:08:03.457 { 00:08:03.457 "nbd_device": "/dev/nbd1", 00:08:03.457 "bdev_name": "Nvme1n1" 00:08:03.457 }, 00:08:03.457 { 00:08:03.457 "nbd_device": "/dev/nbd2", 00:08:03.457 "bdev_name": "Nvme2n1" 00:08:03.457 }, 00:08:03.457 { 00:08:03.457 "nbd_device": "/dev/nbd3", 00:08:03.457 "bdev_name": "Nvme2n2" 00:08:03.457 }, 00:08:03.457 { 00:08:03.457 "nbd_device": "/dev/nbd4", 00:08:03.457 "bdev_name": "Nvme2n3" 00:08:03.457 }, 00:08:03.457 { 00:08:03.457 "nbd_device": "/dev/nbd5", 00:08:03.457 "bdev_name": "Nvme3n1" 00:08:03.457 } 00:08:03.457 ]' 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:03.457 { 00:08:03.457 "nbd_device": "/dev/nbd0", 00:08:03.457 "bdev_name": "Nvme0n1" 00:08:03.457 }, 00:08:03.457 { 00:08:03.457 "nbd_device": "/dev/nbd1", 00:08:03.457 "bdev_name": "Nvme1n1" 00:08:03.457 }, 00:08:03.457 { 00:08:03.457 "nbd_device": "/dev/nbd2", 00:08:03.457 "bdev_name": "Nvme2n1" 00:08:03.457 }, 00:08:03.457 { 00:08:03.457 "nbd_device": "/dev/nbd3", 00:08:03.457 "bdev_name": "Nvme2n2" 00:08:03.457 }, 00:08:03.457 { 00:08:03.457 "nbd_device": "/dev/nbd4", 00:08:03.457 "bdev_name": "Nvme2n3" 00:08:03.457 }, 00:08:03.457 { 00:08:03.457 "nbd_device": "/dev/nbd5", 00:08:03.457 "bdev_name": "Nvme3n1" 00:08:03.457 } 00:08:03.457 ]' 00:08:03.457 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.718 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:03.980 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:03.980 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:03.980 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:03.980 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.980 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.980 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:03.980 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.980 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.980 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.980 11:03:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:04.239 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:04.239 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:04.239 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:04.239 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.239 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.239 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:04.239 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.239 11:03:33 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:04.239 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.239 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:04.498 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:04.498 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:04.498 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:04.498 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.498 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.498 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:04.498 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.498 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.498 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.498 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:04.758 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:04.758 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:04.758 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:04.758 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.758 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.758 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:04.758 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.758 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.758 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.758 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:05.019 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:05.019 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:05.019 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:05.019 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.019 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.019 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:05.019 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.019 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.019 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:05.019 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.019 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:05.277 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:05.277 11:03:33 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:05.277 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:05.277 11:03:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:05.277 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:05.535 /dev/nbd0 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.535 
11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.535 1+0 records in 00:08:05.535 1+0 records out 00:08:05.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415378 s, 9.9 MB/s 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:05.535 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:05.797 /dev/nbd1 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.797 1+0 records in 00:08:05.797 1+0 records out 00:08:05.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387093 s, 10.6 MB/s 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:05.797 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:06.056 /dev/nbd10 00:08:06.056 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:06.056 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:06.056 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.057 1+0 records in 00:08:06.057 1+0 records out 00:08:06.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386109 s, 10.6 MB/s 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:06.057 /dev/nbd11 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.057 11:03:34 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.057 1+0 records in 00:08:06.057 1+0 records out 00:08:06.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381147 s, 10.7 MB/s 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:06.057 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.316 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.316 11:03:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:06.316 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.316 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:06.316 11:03:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:06.316 /dev/nbd12 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.316 1+0 records in 00:08:06.316 1+0 records out 00:08:06.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556749 s, 7.4 MB/s 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:06.316 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:06.574 /dev/nbd13 
00:08:06.574 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:06.574 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:06.574 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:06.574 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:06.574 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:06.574 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.575 1+0 records in 00:08:06.575 1+0 records out 00:08:06.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445613 s, 9.2 MB/s 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.575 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.833 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:06.833 { 00:08:06.833 "nbd_device": "/dev/nbd0", 00:08:06.833 "bdev_name": "Nvme0n1" 00:08:06.833 }, 00:08:06.833 { 00:08:06.833 "nbd_device": "/dev/nbd1", 00:08:06.833 "bdev_name": "Nvme1n1" 00:08:06.833 }, 00:08:06.833 { 00:08:06.833 "nbd_device": "/dev/nbd10", 00:08:06.833 "bdev_name": "Nvme2n1" 00:08:06.833 }, 00:08:06.833 { 00:08:06.833 "nbd_device": "/dev/nbd11", 00:08:06.833 "bdev_name": "Nvme2n2" 00:08:06.833 }, 00:08:06.833 { 00:08:06.833 "nbd_device": "/dev/nbd12", 00:08:06.833 "bdev_name": "Nvme2n3" 00:08:06.833 }, 00:08:06.833 { 00:08:06.833 "nbd_device": "/dev/nbd13", 00:08:06.833 "bdev_name": "Nvme3n1" 00:08:06.834 } 00:08:06.834 ]' 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:06.834 { 00:08:06.834 "nbd_device": "/dev/nbd0", 00:08:06.834 "bdev_name": "Nvme0n1" 00:08:06.834 }, 00:08:06.834 { 00:08:06.834 "nbd_device": "/dev/nbd1", 00:08:06.834 "bdev_name": "Nvme1n1" 00:08:06.834 }, 00:08:06.834 { 00:08:06.834 "nbd_device": "/dev/nbd10", 00:08:06.834 "bdev_name": "Nvme2n1" 
00:08:06.834 }, 00:08:06.834 { 00:08:06.834 "nbd_device": "/dev/nbd11", 00:08:06.834 "bdev_name": "Nvme2n2" 00:08:06.834 }, 00:08:06.834 { 00:08:06.834 "nbd_device": "/dev/nbd12", 00:08:06.834 "bdev_name": "Nvme2n3" 00:08:06.834 }, 00:08:06.834 { 00:08:06.834 "nbd_device": "/dev/nbd13", 00:08:06.834 "bdev_name": "Nvme3n1" 00:08:06.834 } 00:08:06.834 ]' 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:06.834 /dev/nbd1 00:08:06.834 /dev/nbd10 00:08:06.834 /dev/nbd11 00:08:06.834 /dev/nbd12 00:08:06.834 /dev/nbd13' 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:06.834 /dev/nbd1 00:08:06.834 /dev/nbd10 00:08:06.834 /dev/nbd11 00:08:06.834 /dev/nbd12 00:08:06.834 /dev/nbd13' 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:06.834 256+0 records in 00:08:06.834 256+0 records out 00:08:06.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00902342 s, 116 MB/s 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.834 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:07.092 256+0 records in 00:08:07.092 256+0 records out 00:08:07.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0890692 s, 11.8 MB/s 00:08:07.092 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.092 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:07.092 256+0 records in 00:08:07.092 256+0 records out 00:08:07.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0512411 s, 20.5 MB/s 00:08:07.092 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.092 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:07.092 256+0 records in 00:08:07.092 256+0 records out 
00:08:07.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0493863 s, 21.2 MB/s 00:08:07.092 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.092 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:07.092 256+0 records in 00:08:07.092 256+0 records out 00:08:07.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0523216 s, 20.0 MB/s 00:08:07.092 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.092 11:03:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:07.351 256+0 records in 00:08:07.351 256+0 records out 00:08:07.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0861488 s, 12.2 MB/s 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:07.351 256+0 records in 00:08:07.351 256+0 records out 00:08:07.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0525284 s, 20.0 MB/s 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.351 11:03:36 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.351 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:07.608 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:07.608 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:07.608 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:07.608 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.608 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.608 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.608 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.608 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.608 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.608 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.868 
11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.868 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:08.149 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:08.149 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:08.149 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:08.149 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.149 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.149 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:08.149 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.149 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.149 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.149 11:03:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:08.409 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:08.409 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:08.409 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:08.409 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.409 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.409 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:08.409 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.409 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.409 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.409 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:08.667 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:08.667 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:08.667 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:08.667 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.667 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.667 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:08.667 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.667 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.667 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.667 11:03:37 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.667 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.923 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:08.923 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:08.924 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:08.924 malloc_lvol_verify 00:08:09.181 11:03:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:09.181 7c80d9c4-4dab-4cdc-8393-a6a3eb67151e 00:08:09.181 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:09.438 04d6dab9-4563-4a0c-b21a-99279c922468 00:08:09.438 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:09.695 /dev/nbd0 00:08:09.695 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:09.695 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:09.695 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:09.695 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:09.695 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:09.695 mke2fs 1.47.0 (5-Feb-2023) 00:08:09.695 Discarding device blocks: 0/4096 done 00:08:09.695 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:09.695 00:08:09.695 Allocating group tables: 0/1 done 00:08:09.695 Writing inode tables: 0/1 done 00:08:09.695 Creating journal (1024 blocks): done 00:08:09.695 Writing superblocks and filesystem accounting information: 0/1 done 00:08:09.695 00:08:09.695 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
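The nbd_with_lvol_verify step traced above layers a logical volume on a malloc bdev, exposes it over NBD and formats it to prove the data path works end to end. A rough sketch of that sequence, using only RPCs that appear in the trace (sizes follow rpc.py's MiB conventions; the lvstore and lvol UUIDs are returned by the create calls):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
  "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore UUID
  "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume on that store
  "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
  # the test also checks /sys/block/nbd0/size is non-zero before touching the device
  mkfs.ext4 /dev/nbd0                                                 # formatting proves reads and writes work
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0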
00:08:09.695 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.695 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:09.695 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:09.695 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:09.695 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.695 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72374 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 72374 ']' 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 72374 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72374 00:08:09.952 killing process with pid 72374 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72374' 00:08:09.952 11:03:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 72374 00:08:09.953 11:03:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 72374 00:08:10.210 ************************************ 00:08:10.210 END TEST bdev_nbd 00:08:10.210 ************************************ 00:08:10.210 11:03:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:10.210 00:08:10.210 real 0m9.106s 00:08:10.210 user 0m13.393s 00:08:10.210 sys 0m3.101s 00:08:10.210 11:03:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.210 11:03:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:10.210 11:03:38 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:10.210 11:03:38 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:08:10.210 skipping fio tests on NVMe due to multi-ns failures. 00:08:10.210 11:03:38 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
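Throughout the bdev_nbd test that finishes here, the waitfornbd and waitfornbd_exit helpers poll /proc/partitions (up to 20 tries) for the nbd name to appear or disappear, and waitfornbd additionally does one 4 KiB O_DIRECT read to confirm the device actually serves I/O. A hedged reconstruction of that idiom; the loop bound, grep, dd and stat calls mirror the trace, while the sleep back-off and the /tmp/nbdtest scratch path are assumptions standing in for the test's nbdtest file:

  waitfornbd() {            # wait for the kernel to list the device, then prove it answers a read
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1          # assumed back-off; the trace only shows the successful first pass
      done
      dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }

  waitfornbd_exit() {        # same polling, but wait for the name to vanish after nbd_stop_disk
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || break
          sleep 0.1
      done
  }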
00:08:10.210 11:03:38 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:10.210 11:03:38 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:10.210 11:03:38 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:10.210 11:03:38 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.210 11:03:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.210 ************************************ 00:08:10.210 START TEST bdev_verify 00:08:10.210 ************************************ 00:08:10.210 11:03:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:10.210 [2024-11-27 11:03:39.014297] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:10.210 [2024-11-27 11:03:39.014404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72742 ] 00:08:10.468 [2024-11-27 11:03:39.164534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.468 [2024-11-27 11:03:39.196258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.468 [2024-11-27 11:03:39.196297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.725 Running I/O for 5 seconds... 00:08:13.037 23488.00 IOPS, 91.75 MiB/s [2024-11-27T11:03:42.864Z] 24832.00 IOPS, 97.00 MiB/s [2024-11-27T11:03:43.803Z] 23317.33 IOPS, 91.08 MiB/s [2024-11-27T11:03:44.785Z] 22544.00 IOPS, 88.06 MiB/s [2024-11-27T11:03:44.785Z] 22988.80 IOPS, 89.80 MiB/s 00:08:15.902 Latency(us) 00:08:15.902 [2024-11-27T11:03:44.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.902 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.902 Verification LBA range: start 0x0 length 0xbd0bd 00:08:15.902 Nvme0n1 : 5.06 1872.84 7.32 0.00 0.00 68178.88 13409.67 152446.82 00:08:15.902 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.902 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:15.902 Nvme0n1 : 5.04 1931.77 7.55 0.00 0.00 66041.81 12552.66 73400.32 00:08:15.902 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.902 Verification LBA range: start 0x0 length 0xa0000 00:08:15.902 Nvme1n1 : 5.06 1872.34 7.31 0.00 0.00 68048.26 14518.74 154866.61 00:08:15.902 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.902 Verification LBA range: start 0xa0000 length 0xa0000 00:08:15.902 Nvme1n1 : 5.04 1931.16 7.54 0.00 0.00 65968.34 14014.62 64931.05 00:08:15.902 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.902 Verification LBA range: start 0x0 length 0x80000 00:08:15.902 Nvme2n1 : 5.06 1871.84 7.31 0.00 0.00 67993.69 14518.74 157286.40 00:08:15.902 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.902 Verification LBA range: start 0x80000 length 0x80000 00:08:15.902 Nvme2n1 : 5.06 1935.25 7.56 0.00 0.00 65613.31 7208.96 57671.68 00:08:15.902 Job: 
Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.902 Verification LBA range: start 0x0 length 0x80000 00:08:15.902 Nvme2n2 : 5.07 1870.08 7.31 0.00 0.00 67898.34 16837.71 158899.59 00:08:15.902 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.902 Verification LBA range: start 0x80000 length 0x80000 00:08:15.902 Nvme2n2 : 5.07 1942.65 7.59 0.00 0.00 65362.41 9628.75 58074.98 00:08:15.902 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.902 Verification LBA range: start 0x0 length 0x80000 00:08:15.902 Nvme2n3 : 5.07 1869.49 7.30 0.00 0.00 67780.84 15728.64 167772.16 00:08:15.902 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.902 Verification LBA range: start 0x80000 length 0x80000 00:08:15.902 Nvme2n3 : 5.07 1942.14 7.59 0.00 0.00 65243.66 9931.22 58881.58 00:08:15.902 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.902 Verification LBA range: start 0x0 length 0x20000 00:08:15.902 Nvme3n1 : 5.07 1869.00 7.30 0.00 0.00 67657.87 8469.27 162125.98 00:08:15.902 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.902 Verification LBA range: start 0x20000 length 0x20000 00:08:15.902 Nvme3n1 : 5.08 1941.64 7.58 0.00 0.00 65126.31 8922.98 63317.86 00:08:15.902 [2024-11-27T11:03:44.785Z] =================================================================================================================== 00:08:15.902 [2024-11-27T11:03:44.785Z] Total : 22850.21 89.26 0.00 0.00 66721.49 7208.96 167772.16 00:08:17.314 00:08:17.314 real 0m6.826s 00:08:17.314 user 0m12.893s 00:08:17.314 sys 0m0.212s 00:08:17.314 11:03:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.314 ************************************ 00:08:17.314 END TEST bdev_verify 00:08:17.314 ************************************ 00:08:17.314 11:03:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:17.314 11:03:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:17.314 11:03:45 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:17.314 11:03:45 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.314 11:03:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:17.314 ************************************ 00:08:17.314 START TEST bdev_verify_big_io 00:08:17.314 ************************************ 00:08:17.314 11:03:45 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:17.314 [2024-11-27 11:03:45.927438] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
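The bdev_verify stage is driven by the bdevperf example application rather than fio, which the harness skipped earlier due to multi-ns failures. Restating the invocation from the run_test line, with comments added for the flags whose meaning is standard bdevperf usage (-C and the trailing empty argument from run_test are simply carried over from the trace):

    # queue depth 128, 4 KiB I/Os, data-verifying workload for 5 seconds,
    # reactors pinned to cores 0 and 1 via the 0x3 core mask
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

As a sanity check on the table above, throughput is IOPS times I/O size: 22850.21 x 4096 / 2^20 ≈ 89.26 MiB/s, matching the Total row.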
00:08:17.314 [2024-11-27 11:03:45.927582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72842 ] 00:08:17.314 [2024-11-27 11:03:46.074712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:17.314 [2024-11-27 11:03:46.130320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.314 [2024-11-27 11:03:46.130423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.884 Running I/O for 5 seconds... 00:08:22.339 656.00 IOPS, 41.00 MiB/s [2024-11-27T11:03:52.602Z] 1718.50 IOPS, 107.41 MiB/s [2024-11-27T11:03:52.602Z] 2373.67 IOPS, 148.35 MiB/s 00:08:23.719 Latency(us) 00:08:23.719 [2024-11-27T11:03:52.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.719 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.719 Verification LBA range: start 0x0 length 0xbd0b 00:08:23.719 Nvme0n1 : 5.69 130.68 8.17 0.00 0.00 944537.74 40128.20 948557.98 00:08:23.719 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.719 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:23.719 Nvme0n1 : 5.78 102.34 6.40 0.00 0.00 1209118.21 11796.48 980821.86 00:08:23.719 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.719 Verification LBA range: start 0x0 length 0xa000 00:08:23.719 Nvme1n1 : 5.70 134.81 8.43 0.00 0.00 901758.82 92758.65 806596.92 00:08:23.719 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.719 Verification LBA range: start 0xa000 length 0xa000 00:08:23.719 Nvme1n1 : 5.83 96.67 6.04 0.00 0.00 1234565.24 94775.14 2090699.22 00:08:23.719 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.719 Verification LBA range: start 0x0 length 0x8000 00:08:23.719 Nvme2n1 : 5.70 134.73 8.42 0.00 0.00 875698.28 92355.35 819502.47 00:08:23.719 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.719 Verification LBA range: start 0x8000 length 0x8000 00:08:23.719 Nvme2n1 : 5.84 96.65 6.04 0.00 0.00 1194995.51 92355.35 2129415.88 00:08:23.719 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.719 Verification LBA range: start 0x0 length 0x8000 00:08:23.719 Nvme2n2 : 5.71 134.62 8.41 0.00 0.00 849572.37 94371.84 838860.80 00:08:23.719 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.719 Verification LBA range: start 0x8000 length 0x8000 00:08:23.719 Nvme2n2 : 5.84 100.88 6.31 0.00 0.00 1116237.53 44161.18 1845493.76 00:08:23.719 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.719 Verification LBA range: start 0x0 length 0x8000 00:08:23.719 Nvme2n3 : 5.83 150.34 9.40 0.00 0.00 745819.76 10384.94 864671.90 00:08:23.719 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.719 Verification LBA range: start 0x8000 length 0x8000 00:08:23.719 Nvme2n3 : 5.89 111.95 7.00 0.00 0.00 976455.69 12048.54 2232660.28 00:08:23.719 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.719 Verification LBA range: start 0x0 length 0x2000 00:08:23.719 Nvme3n1 : 5.83 153.71 9.61 0.00 0.00 707331.19 2760.07 884030.23 00:08:23.719 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:08:23.719 Verification LBA range: start 0x2000 length 0x2000 00:08:23.719 Nvme3n1 : 5.94 147.24 9.20 0.00 0.00 727513.29 354.46 2271376.94 00:08:23.719 [2024-11-27T11:03:52.602Z] =================================================================================================================== 00:08:23.719 [2024-11-27T11:03:52.602Z] Total : 1494.61 93.41 0.00 0.00 927045.38 354.46 2271376.94 00:08:24.661 00:08:24.661 real 0m7.518s 00:08:24.661 user 0m13.735s 00:08:24.661 sys 0m0.301s 00:08:24.661 11:03:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.661 ************************************ 00:08:24.661 END TEST bdev_verify_big_io 00:08:24.661 ************************************ 00:08:24.661 11:03:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:24.661 11:03:53 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.661 11:03:53 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:24.661 11:03:53 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.661 11:03:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.661 ************************************ 00:08:24.661 START TEST bdev_write_zeroes 00:08:24.661 ************************************ 00:08:24.661 11:03:53 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.661 [2024-11-27 11:03:53.520966] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:24.661 [2024-11-27 11:03:53.521126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72940 ] 00:08:24.921 [2024-11-27 11:03:53.675922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.921 [2024-11-27 11:03:53.736018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.491 Running I/O for 1 seconds... 
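The bdev_verify_big_io stage that completed just above reuses the same bdevperf command with only the I/O size changed, -o 65536 (64 KiB) instead of -o 4096, which is why IOPS drop sharply while bandwidth stays in the same range. The same cross-check applies to its Total row: 1494.61 x 65536 / 2^20 ≈ 93.41 MiB/s.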
00:08:26.429 62528.00 IOPS, 244.25 MiB/s 00:08:26.429 Latency(us) 00:08:26.429 [2024-11-27T11:03:55.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.429 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:26.429 Nvme0n1 : 1.02 10394.28 40.60 0.00 0.00 12290.72 5167.26 23088.84 00:08:26.429 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:26.430 Nvme1n1 : 1.02 10382.30 40.56 0.00 0.00 12286.22 8822.15 22383.06 00:08:26.430 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:26.430 Nvme2n1 : 1.02 10370.55 40.51 0.00 0.00 12220.11 8771.74 19761.62 00:08:26.430 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:26.430 Nvme2n2 : 1.03 10358.90 40.46 0.00 0.00 12187.93 7259.37 19660.80 00:08:26.430 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:26.430 Nvme2n3 : 1.03 10346.83 40.42 0.00 0.00 12162.73 4839.58 19559.98 00:08:26.430 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:26.430 Nvme3n1 : 1.03 10272.94 40.13 0.00 0.00 12234.69 8670.92 23592.96 00:08:26.430 [2024-11-27T11:03:55.313Z] =================================================================================================================== 00:08:26.430 [2024-11-27T11:03:55.313Z] Total : 62125.80 242.68 0.00 0.00 12230.40 4839.58 23592.96 00:08:26.689 00:08:26.689 real 0m1.940s 00:08:26.689 user 0m1.583s 00:08:26.690 sys 0m0.240s 00:08:26.690 11:03:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.690 ************************************ 00:08:26.690 END TEST bdev_write_zeroes 00:08:26.690 ************************************ 00:08:26.690 11:03:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:26.690 11:03:55 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.690 11:03:55 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:26.690 11:03:55 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.690 11:03:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.690 ************************************ 00:08:26.690 START TEST bdev_json_nonenclosed 00:08:26.690 ************************************ 00:08:26.690 11:03:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.690 [2024-11-27 11:03:55.497080] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:26.690 [2024-11-27 11:03:55.497202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72982 ] 00:08:26.948 [2024-11-27 11:03:55.646224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.948 [2024-11-27 11:03:55.680861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.948 [2024-11-27 11:03:55.680983] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:26.948 [2024-11-27 11:03:55.681001] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:26.948 [2024-11-27 11:03:55.681012] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.948 00:08:26.948 real 0m0.323s 00:08:26.948 user 0m0.119s 00:08:26.948 sys 0m0.101s 00:08:26.948 11:03:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.948 ************************************ 00:08:26.948 END TEST bdev_json_nonenclosed 00:08:26.948 ************************************ 00:08:26.948 11:03:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:26.948 11:03:55 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.948 11:03:55 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:26.948 11:03:55 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.948 11:03:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.948 ************************************ 00:08:26.948 START TEST bdev_json_nonarray 00:08:26.948 ************************************ 00:08:26.948 11:03:55 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:27.207 [2024-11-27 11:03:55.867755] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:27.207 [2024-11-27 11:03:55.867867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73002 ] 00:08:27.207 [2024-11-27 11:03:56.016861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.207 [2024-11-27 11:03:56.051324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.207 [2024-11-27 11:03:56.051418] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
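The two JSON negative tests here feed bdevperf deliberately malformed configuration files: bdev_json_nonenclosed exercises the "not enclosed in {}" error and bdev_json_nonarray the "'subsystems' should be an array" error, and the failing parse is the behavior under test. For orientation, a minimal well-formed config of the shape those checks guard is sketched below. This is illustrative only; the actual nonenclosed.json and nonarray.json fixture contents are not shown in this log, and the attach parameters simply mirror the Nvme0 controller used elsewhere in this run.

    cat > /tmp/minimal_bdev_config.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF

Dropping the outer braces reproduces the first error; turning "subsystems" into an object instead of an array reproduces the second.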
00:08:27.207 [2024-11-27 11:03:56.051439] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:27.207 [2024-11-27 11:03:56.051449] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:27.467 00:08:27.467 real 0m0.327s 00:08:27.468 user 0m0.126s 00:08:27.468 sys 0m0.098s 00:08:27.468 11:03:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.468 11:03:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:27.468 ************************************ 00:08:27.468 END TEST bdev_json_nonarray 00:08:27.468 ************************************ 00:08:27.468 11:03:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:27.468 11:03:56 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:27.468 11:03:56 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:27.468 11:03:56 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:27.468 11:03:56 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:27.468 11:03:56 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:27.468 11:03:56 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:27.468 11:03:56 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:27.468 11:03:56 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:27.468 11:03:56 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:27.468 11:03:56 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:27.468 00:08:27.468 real 0m30.967s 00:08:27.468 user 0m49.075s 00:08:27.468 sys 0m5.287s 00:08:27.468 11:03:56 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.468 ************************************ 00:08:27.468 END TEST blockdev_nvme 00:08:27.468 ************************************ 00:08:27.468 11:03:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.468 11:03:56 -- spdk/autotest.sh@209 -- # uname -s 00:08:27.468 11:03:56 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:27.468 11:03:56 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:27.468 11:03:56 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:27.468 11:03:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.468 11:03:56 -- common/autotest_common.sh@10 -- # set +x 00:08:27.468 ************************************ 00:08:27.468 START TEST blockdev_nvme_gpt 00:08:27.468 ************************************ 00:08:27.468 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:27.468 * Looking for test storage... 
00:08:27.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:27.468 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:27.468 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:08:27.468 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:27.468 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:27.468 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.727 11:03:56 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:27.727 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.727 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:27.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.727 --rc genhtml_branch_coverage=1 00:08:27.727 --rc genhtml_function_coverage=1 00:08:27.727 --rc genhtml_legend=1 00:08:27.727 --rc geninfo_all_blocks=1 00:08:27.727 --rc geninfo_unexecuted_blocks=1 00:08:27.727 00:08:27.727 ' 00:08:27.727 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:27.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.727 --rc 
genhtml_branch_coverage=1 00:08:27.727 --rc genhtml_function_coverage=1 00:08:27.727 --rc genhtml_legend=1 00:08:27.727 --rc geninfo_all_blocks=1 00:08:27.727 --rc geninfo_unexecuted_blocks=1 00:08:27.728 00:08:27.728 ' 00:08:27.728 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:27.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.728 --rc genhtml_branch_coverage=1 00:08:27.728 --rc genhtml_function_coverage=1 00:08:27.728 --rc genhtml_legend=1 00:08:27.728 --rc geninfo_all_blocks=1 00:08:27.728 --rc geninfo_unexecuted_blocks=1 00:08:27.728 00:08:27.728 ' 00:08:27.728 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:27.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.728 --rc genhtml_branch_coverage=1 00:08:27.728 --rc genhtml_function_coverage=1 00:08:27.728 --rc genhtml_legend=1 00:08:27.728 --rc geninfo_all_blocks=1 00:08:27.728 --rc geninfo_unexecuted_blocks=1 00:08:27.728 00:08:27.728 ' 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73086 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:27.728 11:03:56 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 73086 00:08:27.728 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 73086 ']' 00:08:27.728 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.728 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.728 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.728 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.728 11:03:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:27.728 [2024-11-27 11:03:56.446700] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:27.728 [2024-11-27 11:03:56.446972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73086 ] 00:08:27.728 [2024-11-27 11:03:56.588455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.987 [2024-11-27 11:03:56.622983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.560 11:03:57 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.560 11:03:57 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:08:28.560 11:03:57 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:28.560 11:03:57 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:28.560 11:03:57 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:28.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:29.078 Waiting for block devices as requested 00:08:29.078 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:29.078 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:29.078 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:29.336 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:34.597 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 
00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
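The get_zoned_devs walk above decides, per namespace, whether it is zoned by reading the block queue attribute in sysfs; only non-zoned devices are kept as candidates for the GPT setup. A condensed restatement of the checks being traced (not the helper's exact source):

    # A device counts as zoned when /sys/block/<dev>/queue/zoned exists and reads anything other than "none"
    for sysdev in /sys/block/nvme*; do
        dev=$(basename "$sysdev")
        if [[ -e $sysdev/queue/zoned ]] && [[ $(cat "$sysdev/queue/zoned") != none ]]; then
            echo "$dev is zoned, skipping"
        fi
    done

In this run every namespace reports "none", so all six entries stay in nvme_devs and the first one, /dev/nvme0n1, becomes the GPT candidate in the steps that follow.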
00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:34.597 BYT; 00:08:34.597 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:34.597 BYT; 00:08:34.597 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:34.597 11:04:03 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:34.597 11:04:03 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:35.530 The operation has completed successfully. 00:08:35.530 11:04:04 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:36.464 The operation has completed successfully. 00:08:36.464 11:04:05 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:36.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:37.291 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:37.291 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:37.291 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:37.291 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:37.291 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:37.291 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.291 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.291 [] 00:08:37.291 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.291 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:37.291 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:37.291 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:37.291 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:37.291 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:37.291 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.291 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.549 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.549 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:37.549 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.549 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.809 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.809 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:08:37.809 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:37.809 11:04:06 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.809 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.809 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.809 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:37.809 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.809 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.809 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.809 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:37.809 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.809 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.809 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.809 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:37.809 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:37.809 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:37.809 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.809 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.809 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.809 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:37.810 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a1b7ac01-f8ca-4b2e-99fa-23c724b3e9bb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a1b7ac01-f8ca-4b2e-99fa-23c724b3e9bb",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e3572ad5-a102-463b-8dc6-d963cdf16cda"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3572ad5-a102-463b-8dc6-d963cdf16cda",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c8bce309-5f95-4fec-af6e-4d974549b361"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c8bce309-5f95-4fec-af6e-4d974549b361",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1023f9b6-bbb2-4a19-bae7-a7c95f9a68d7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1023f9b6-bbb2-4a19-bae7-a7c95f9a68d7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2fb8b3b4-6f80-4ad7-9ed1-1cc11b98f51a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2fb8b3b4-6f80-4ad7-9ed1-1cc11b98f51a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:37.810 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:37.810 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:37.810 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:37.810 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:37.810 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 73086 00:08:37.810 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 73086 ']' 00:08:37.810 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 73086 00:08:37.810 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:08:37.810 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.810 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73086 00:08:37.810 killing process with pid 73086 00:08:37.810 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.810 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.810 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73086' 00:08:37.810 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 73086 00:08:37.810 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 73086 00:08:38.068 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:38.068 11:04:06 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:38.068 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:38.068 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.068 11:04:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:38.068 ************************************ 00:08:38.068 START TEST bdev_hello_world 00:08:38.068 ************************************ 00:08:38.068 11:04:06 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:38.068 [2024-11-27 
11:04:06.945914] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:38.068 [2024-11-27 11:04:06.946064] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73694 ] 00:08:38.326 [2024-11-27 11:04:07.094110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.326 [2024-11-27 11:04:07.128745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.892 [2024-11-27 11:04:07.500579] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:38.892 [2024-11-27 11:04:07.500626] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:38.892 [2024-11-27 11:04:07.500644] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:38.892 [2024-11-27 11:04:07.502712] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:38.892 [2024-11-27 11:04:07.503107] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:38.892 [2024-11-27 11:04:07.503134] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:38.892 [2024-11-27 11:04:07.503250] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:38.892 00:08:38.892 [2024-11-27 11:04:07.503271] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:38.892 ************************************ 00:08:38.892 END TEST bdev_hello_world 00:08:38.892 ************************************ 00:08:38.892 00:08:38.892 real 0m0.788s 00:08:38.892 user 0m0.520s 00:08:38.892 sys 0m0.165s 00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:38.892 11:04:07 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:38.892 11:04:07 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.892 11:04:07 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.892 11:04:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:38.892 ************************************ 00:08:38.892 START TEST bdev_bounds 00:08:38.892 ************************************ 00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:08:38.892 Process bdevio pid: 73725 00:08:38.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
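Note on the hello_world run above: the example follows the standard SPDK flow of opening the bdev passed with -b, opening an I/O channel, writing a buffer, and reading back the "Hello World!" string before stopping the app. A minimal sketch of re-running it by hand, assuming the repo path and bdev name reported earlier in this log (root privileges are typically needed for PCIe NVMe devices):

  cd /home/vagrant/spdk_repo/spdk
  # drive the example with the same bdev config the test used
  sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1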
00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73725 00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73725' 00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73725 00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73725 ']' 00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.892 11:04:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:38.892 [2024-11-27 11:04:07.766079] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:38.892 [2024-11-27 11:04:07.766199] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73725 ] 00:08:39.150 [2024-11-27 11:04:07.905863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:39.150 [2024-11-27 11:04:07.942261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.150 [2024-11-27 11:04:07.942908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.150 [2024-11-27 11:04:07.942991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.796 11:04:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.796 11:04:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:08:39.796 11:04:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:40.056 I/O targets: 00:08:40.056 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:40.056 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:40.056 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:40.056 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:40.056 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:40.056 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:40.056 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:40.056 00:08:40.056 00:08:40.056 CUnit - A unit testing framework for C - Version 2.1-3 00:08:40.056 http://cunit.sourceforge.net/ 00:08:40.056 00:08:40.056 00:08:40.056 Suite: bdevio tests on: Nvme3n1 00:08:40.056 Test: blockdev write read block ...passed 00:08:40.056 Test: blockdev write zeroes read block ...passed 00:08:40.056 Test: blockdev write zeroes read no split ...passed 00:08:40.056 Test: blockdev write zeroes read split ...passed 00:08:40.056 Test: blockdev write zeroes 
read split partial ...passed 00:08:40.056 Test: blockdev reset ...[2024-11-27 11:04:08.722001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:40.056 [2024-11-27 11:04:08.723995] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:40.056 passed 00:08:40.056 Test: blockdev write read 8 blocks ...passed 00:08:40.056 Test: blockdev write read size > 128k ...passed 00:08:40.056 Test: blockdev write read invalid size ...passed 00:08:40.056 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.056 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.056 Test: blockdev write read max offset ...passed 00:08:40.056 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.056 Test: blockdev writev readv 8 blocks ...passed 00:08:40.056 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.056 Test: blockdev writev readv block ...passed 00:08:40.056 Test: blockdev writev readv size > 128k ...passed 00:08:40.056 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.056 Test: blockdev comparev and writev ...[2024-11-27 11:04:08.729325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6e0e000 len:0x1000 00:08:40.056 [2024-11-27 11:04:08.729378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.056 passed 00:08:40.056 Test: blockdev nvme passthru rw ...passed 00:08:40.056 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.056 Test: blockdev nvme admin passthru ...[2024-11-27 11:04:08.730116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.056 [2024-11-27 11:04:08.730148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.056 passed 00:08:40.056 Test: blockdev copy ...passed 00:08:40.056 Suite: bdevio tests on: Nvme2n3 00:08:40.056 Test: blockdev write read block ...passed 00:08:40.056 Test: blockdev write zeroes read block ...passed 00:08:40.056 Test: blockdev write zeroes read no split ...passed 00:08:40.056 Test: blockdev write zeroes read split ...passed 00:08:40.056 Test: blockdev write zeroes read split partial ...passed 00:08:40.056 Test: blockdev reset ...[2024-11-27 11:04:08.743105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:40.056 [2024-11-27 11:04:08.745406] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.056 passed 00:08:40.056 Test: blockdev write read 8 blocks ...passed 00:08:40.056 Test: blockdev write read size > 128k ...passed 00:08:40.056 Test: blockdev write read invalid size ...passed 00:08:40.056 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.056 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.056 Test: blockdev write read max offset ...passed 00:08:40.056 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.056 Test: blockdev writev readv 8 blocks ...passed 00:08:40.056 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.056 Test: blockdev writev readv block ...passed 00:08:40.056 Test: blockdev writev readv size > 128k ...passed 00:08:40.056 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.056 Test: blockdev comparev and writev ...[2024-11-27 11:04:08.750667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:08:40.056 Test: blockdev nvme passthru rw ...passed 00:08:40.056 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.056 Test: blockdev nvme admin passthru ...SGL DATA BLOCK ADDRESS 0x2c6e0a000 len:0x1000 00:08:40.056 [2024-11-27 11:04:08.750857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.056 [2024-11-27 11:04:08.751307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.056 [2024-11-27 11:04:08.751333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.056 passed 00:08:40.056 Test: blockdev copy ...passed 00:08:40.056 Suite: bdevio tests on: Nvme2n2 00:08:40.056 Test: blockdev write read block ...passed 00:08:40.056 Test: blockdev write zeroes read block ...passed 00:08:40.056 Test: blockdev write zeroes read no split ...passed 00:08:40.056 Test: blockdev write zeroes read split ...passed 00:08:40.056 Test: blockdev write zeroes read split partial ...passed 00:08:40.056 Test: blockdev reset ...[2024-11-27 11:04:08.764282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:40.056 [2024-11-27 11:04:08.766056] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.056 passed 00:08:40.056 Test: blockdev write read 8 blocks ...passed 00:08:40.056 Test: blockdev write read size > 128k ...passed 00:08:40.056 Test: blockdev write read invalid size ...passed 00:08:40.056 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.056 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.056 Test: blockdev write read max offset ...passed 00:08:40.056 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.056 Test: blockdev writev readv 8 blocks ...passed 00:08:40.056 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.056 Test: blockdev writev readv block ...passed 00:08:40.056 Test: blockdev writev readv size > 128k ...passed 00:08:40.056 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.056 Test: blockdev comparev and writev ...[2024-11-27 11:04:08.771291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:08:40.056 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2dae05000 len:0x1000 00:08:40.056 [2024-11-27 11:04:08.771473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.056 passed 00:08:40.056 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.056 Test: blockdev nvme admin passthru ...[2024-11-27 11:04:08.772356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.056 [2024-11-27 11:04:08.772387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.056 passed 00:08:40.056 Test: blockdev copy ...passed 00:08:40.056 Suite: bdevio tests on: Nvme2n1 00:08:40.056 Test: blockdev write read block ...passed 00:08:40.056 Test: blockdev write zeroes read block ...passed 00:08:40.056 Test: blockdev write zeroes read no split ...passed 00:08:40.056 Test: blockdev write zeroes read split ...passed 00:08:40.056 Test: blockdev write zeroes read split partial ...passed 00:08:40.056 Test: blockdev reset ...[2024-11-27 11:04:08.784529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:40.056 passed 00:08:40.056 Test: blockdev write read 8 blocks ...[2024-11-27 11:04:08.786213] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.056 passed 00:08:40.056 Test: blockdev write read size > 128k ...passed 00:08:40.056 Test: blockdev write read invalid size ...passed 00:08:40.056 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.056 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.056 Test: blockdev write read max offset ...passed 00:08:40.056 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.056 Test: blockdev writev readv 8 blocks ...passed 00:08:40.056 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.056 Test: blockdev writev readv block ...passed 00:08:40.056 Test: blockdev writev readv size > 128k ...passed 00:08:40.056 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.057 Test: blockdev comparev and writev ...[2024-11-27 11:04:08.791115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:08:40.057 Test: blockdev nvme passthru rw ...passed 00:08:40.057 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.057 Test: blockdev nvme admin passthru ...SGL DATA BLOCK ADDRESS 0x2c6a02000 len:0x1000 00:08:40.057 [2024-11-27 11:04:08.791306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.057 [2024-11-27 11:04:08.791784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.057 [2024-11-27 11:04:08.791811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.057 passed 00:08:40.057 Test: blockdev copy ...passed 00:08:40.057 Suite: bdevio tests on: Nvme1n1p2 00:08:40.057 Test: blockdev write read block ...passed 00:08:40.057 Test: blockdev write zeroes read block ...passed 00:08:40.057 Test: blockdev write zeroes read no split ...passed 00:08:40.057 Test: blockdev write zeroes read split ...passed 00:08:40.057 Test: blockdev write zeroes read split partial ...passed 00:08:40.057 Test: blockdev reset ...[2024-11-27 11:04:08.806164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:40.057 passed 00:08:40.057 Test: blockdev write read 8 blocks ...[2024-11-27 11:04:08.807726] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.057 passed 00:08:40.057 Test: blockdev write read size > 128k ...passed 00:08:40.057 Test: blockdev write read invalid size ...passed 00:08:40.057 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.057 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.057 Test: blockdev write read max offset ...passed 00:08:40.057 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.057 Test: blockdev writev readv 8 blocks ...passed 00:08:40.057 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.057 Test: blockdev writev readv block ...passed 00:08:40.057 Test: blockdev writev readv size > 128k ...passed 00:08:40.057 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.057 Test: blockdev comparev and writev ...[2024-11-27 11:04:08.812034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2de03b000 len:0x1000 00:08:40.057 [2024-11-27 11:04:08.812072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.057 passed 00:08:40.057 Test: blockdev nvme passthru rw ...passed 00:08:40.057 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.057 Test: blockdev nvme admin passthru ...passed 00:08:40.057 Test: blockdev copy ...passed 00:08:40.057 Suite: bdevio tests on: Nvme1n1p1 00:08:40.057 Test: blockdev write read block ...passed 00:08:40.057 Test: blockdev write zeroes read block ...passed 00:08:40.057 Test: blockdev write zeroes read no split ...passed 00:08:40.057 Test: blockdev write zeroes read split ...passed 00:08:40.057 Test: blockdev write zeroes read split partial ...passed 00:08:40.057 Test: blockdev reset ...[2024-11-27 11:04:08.823191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:40.057 passed 00:08:40.057 Test: blockdev write read 8 blocks ...[2024-11-27 11:04:08.824518] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.057 passed 00:08:40.057 Test: blockdev write read size > 128k ...passed 00:08:40.057 Test: blockdev write read invalid size ...passed 00:08:40.057 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.057 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.057 Test: blockdev write read max offset ...passed 00:08:40.057 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.057 Test: blockdev writev readv 8 blocks ...passed 00:08:40.057 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.057 Test: blockdev writev readv block ...passed 00:08:40.057 Test: blockdev writev readv size > 128k ...passed 00:08:40.057 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.057 Test: blockdev comparev and writev ...[2024-11-27 11:04:08.828611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2de037000 len:0x1000 00:08:40.057 [2024-11-27 11:04:08.828647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.057 passed 00:08:40.057 Test: blockdev nvme passthru rw ...passed 00:08:40.057 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.057 Test: blockdev nvme admin passthru ...passed 00:08:40.057 Test: blockdev copy ...passed 00:08:40.057 Suite: bdevio tests on: Nvme0n1 00:08:40.057 Test: blockdev write read block ...passed 00:08:40.057 Test: blockdev write zeroes read block ...passed 00:08:40.057 Test: blockdev write zeroes read no split ...passed 00:08:40.057 Test: blockdev write zeroes read split ...passed 00:08:40.057 Test: blockdev write zeroes read split partial ...passed 00:08:40.057 Test: blockdev reset ...[2024-11-27 11:04:08.844223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:40.057 [2024-11-27 11:04:08.845754] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:40.057 passed 00:08:40.057 Test: blockdev write read 8 blocks ...passed 00:08:40.057 Test: blockdev write read size > 128k ...passed 00:08:40.057 Test: blockdev write read invalid size ...passed 00:08:40.057 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.057 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.057 Test: blockdev write read max offset ...passed 00:08:40.057 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.057 Test: blockdev writev readv 8 blocks ...passed 00:08:40.057 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.057 Test: blockdev writev readv block ...passed 00:08:40.057 Test: blockdev writev readv size > 128k ...passed 00:08:40.057 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.057 Test: blockdev comparev and writev ...passed 00:08:40.057 Test: blockdev nvme passthru rw ...[2024-11-27 11:04:08.850076] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:40.057 separate metadata which is not supported yet. 
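Note on the bdevio suites above: each per-bdev suite (write/read, zeroes, reset, comparev, passthru) is driven by the two-step invocation recorded earlier in this log, bdevio started in wait mode (-w) against the bdev config and tests.py then triggering the run. A rough manual equivalent, a sketch in which the backgrounding and the readiness wait are assumptions; only the flags already shown above are taken as given:

  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  # wait until bdevio reports it is listening on its RPC socket, then:
  sudo ./test/bdev/bdevio/tests.py perform_tests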
00:08:40.057 passed 00:08:40.057 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.057 Test: blockdev nvme admin passthru ...[2024-11-27 11:04:08.850535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:40.057 [2024-11-27 11:04:08.850573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:40.057 passed 00:08:40.057 Test: blockdev copy ...passed 00:08:40.057 00:08:40.057 Run Summary: Type Total Ran Passed Failed Inactive 00:08:40.057 suites 7 7 n/a 0 0 00:08:40.057 tests 161 161 161 0 0 00:08:40.057 asserts 1025 1025 1025 0 n/a 00:08:40.057 00:08:40.057 Elapsed time = 0.338 seconds 00:08:40.057 0 00:08:40.057 11:04:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73725 00:08:40.057 11:04:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73725 ']' 00:08:40.057 11:04:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73725 00:08:40.057 11:04:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:08:40.057 11:04:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.057 11:04:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73725 00:08:40.057 11:04:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.057 11:04:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.057 11:04:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73725' 00:08:40.057 killing process with pid 73725 00:08:40.057 11:04:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73725 00:08:40.057 11:04:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73725 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:40.316 00:08:40.316 real 0m1.343s 00:08:40.316 user 0m3.363s 00:08:40.316 sys 0m0.286s 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:40.316 ************************************ 00:08:40.316 END TEST bdev_bounds 00:08:40.316 ************************************ 00:08:40.316 11:04:09 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:40.316 11:04:09 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:40.316 11:04:09 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.316 11:04:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:40.316 ************************************ 00:08:40.316 START TEST bdev_nbd 00:08:40.316 ************************************ 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73768 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73768 /var/tmp/spdk-nbd.sock 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73768 ']' 00:08:40.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.316 11:04:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:40.316 [2024-11-27 11:04:09.165139] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
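Note on the bdev_nbd test that starts here: bdev_svc is launched with the same bdev.json on a dedicated RPC socket (/var/tmp/spdk-nbd.sock), each bdev is then exported as a kernel block device with nbd_start_disk, verified with a one-block direct-I/O dd read (as the rpc.py and dd calls below show), and finally detached with nbd_stop_disk. A condensed sketch for a single bdev, assuming the socket path, device node, and scratch file used in this log:

  cd /home/vagrant/spdk_repo/spdk
  sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  sudo dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0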
00:08:40.316 [2024-11-27 11:04:09.165438] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.574 [2024-11-27 11:04:09.321695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.574 [2024-11-27 11:04:09.356770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:41.141 11:04:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.399 1+0 records in 00:08:41.399 1+0 records out 00:08:41.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701244 s, 5.8 MB/s 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:41.399 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:41.657 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.658 1+0 records in 00:08:41.658 1+0 records out 00:08:41.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487051 s, 8.4 MB/s 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:41.658 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.917 1+0 records in 00:08:41.917 1+0 records out 00:08:41.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497139 s, 8.2 MB/s 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:41.917 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.175 1+0 records in 00:08:42.175 1+0 records out 00:08:42.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379041 s, 10.8 MB/s 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.175 11:04:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.433 1+0 records in 00:08:42.433 1+0 records out 00:08:42.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371636 s, 11.0 MB/s 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.433 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.691 1+0 records in 00:08:42.691 1+0 records out 00:08:42.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372445 s, 11.0 MB/s 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.691 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.951 1+0 records in 00:08:42.951 1+0 records out 00:08:42.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559871 s, 7.3 MB/s 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.951 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:43.210 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:43.210 { 00:08:43.210 "nbd_device": "/dev/nbd0", 00:08:43.210 "bdev_name": "Nvme0n1" 00:08:43.210 }, 00:08:43.210 { 00:08:43.210 "nbd_device": "/dev/nbd1", 00:08:43.210 "bdev_name": "Nvme1n1p1" 00:08:43.210 }, 00:08:43.210 { 00:08:43.210 "nbd_device": "/dev/nbd2", 00:08:43.210 "bdev_name": "Nvme1n1p2" 00:08:43.210 }, 00:08:43.210 { 00:08:43.210 "nbd_device": "/dev/nbd3", 00:08:43.210 "bdev_name": "Nvme2n1" 00:08:43.210 }, 00:08:43.210 { 00:08:43.210 "nbd_device": "/dev/nbd4", 00:08:43.211 "bdev_name": "Nvme2n2" 00:08:43.211 }, 00:08:43.211 { 00:08:43.211 "nbd_device": "/dev/nbd5", 00:08:43.211 "bdev_name": "Nvme2n3" 00:08:43.211 }, 00:08:43.211 { 00:08:43.211 "nbd_device": "/dev/nbd6", 00:08:43.211 "bdev_name": "Nvme3n1" 00:08:43.211 } 00:08:43.211 ]' 00:08:43.211 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:43.211 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:43.211 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:43.211 { 00:08:43.211 "nbd_device": "/dev/nbd0", 00:08:43.211 "bdev_name": "Nvme0n1" 00:08:43.211 }, 00:08:43.211 { 00:08:43.211 "nbd_device": "/dev/nbd1", 00:08:43.211 "bdev_name": "Nvme1n1p1" 00:08:43.211 }, 00:08:43.211 { 00:08:43.211 "nbd_device": "/dev/nbd2", 00:08:43.211 "bdev_name": "Nvme1n1p2" 00:08:43.211 }, 00:08:43.211 { 00:08:43.211 "nbd_device": "/dev/nbd3", 00:08:43.211 "bdev_name": "Nvme2n1" 00:08:43.211 }, 00:08:43.211 { 00:08:43.211 "nbd_device": "/dev/nbd4", 00:08:43.211 "bdev_name": "Nvme2n2" 00:08:43.211 }, 00:08:43.211 { 00:08:43.211 "nbd_device": "/dev/nbd5", 00:08:43.211 "bdev_name": "Nvme2n3" 00:08:43.211 }, 00:08:43.211 { 00:08:43.211 "nbd_device": "/dev/nbd6", 00:08:43.211 "bdev_name": "Nvme3n1" 00:08:43.211 } 00:08:43.211 ]' 00:08:43.211 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:43.211 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.211 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:43.211 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:43.211 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:43.211 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.211 11:04:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:43.469 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:43.469 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:43.469 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:43.469 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.469 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.469 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:43.469 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:43.469 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.469 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.469 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:43.727 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.728 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.728 11:04:12 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:43.986 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:43.986 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:43.986 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:43.986 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.986 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.986 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:43.986 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:43.986 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.986 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.986 11:04:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:44.245 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:44.245 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:44.245 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:44.245 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.245 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.245 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:44.245 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.245 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.245 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.245 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:44.503 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:44.503 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:44.503 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:44.503 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.503 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.503 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:44.503 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.503 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.503 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.503 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:44.761 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:44.761 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:44.761 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:08:44.761 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.761 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.761 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:44.761 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.761 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.761 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:44.761 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.762 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:45.020 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:45.020 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:45.020 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:45.020 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:45.020 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:45.021 
11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:45.021 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:45.278 /dev/nbd0 00:08:45.278 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:45.278 11:04:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:45.278 11:04:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:45.278 11:04:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:45.278 11:04:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:45.278 11:04:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:45.278 11:04:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:45.278 11:04:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:45.278 11:04:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:45.278 11:04:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:45.278 11:04:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.278 1+0 records in 00:08:45.278 1+0 records out 00:08:45.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600571 s, 6.8 MB/s 00:08:45.278 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.278 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:45.278 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.278 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:45.278 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:45.278 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.278 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:45.278 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:45.537 /dev/nbd1 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:45.537 11:04:14 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.537 1+0 records in 00:08:45.537 1+0 records out 00:08:45.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538701 s, 7.6 MB/s 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:45.537 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:45.796 /dev/nbd10 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.796 1+0 records in 00:08:45.796 1+0 records out 00:08:45.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298368 s, 13.7 MB/s 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:45.796 /dev/nbd11 00:08:45.796 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.055 1+0 records in 00:08:46.055 1+0 records out 00:08:46.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561741 s, 7.3 MB/s 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:46.055 /dev/nbd12 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.055 1+0 records in 00:08:46.055 1+0 records out 00:08:46.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323846 s, 12.6 MB/s 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.055 11:04:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:46.314 /dev/nbd13 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.314 1+0 records in 00:08:46.314 1+0 records out 00:08:46.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640457 s, 6.4 MB/s 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.314 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:46.572 /dev/nbd14 00:08:46.572 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:46.572 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:46.572 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:08:46.572 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:46.572 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:46.572 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:46.572 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:08:46.572 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:46.572 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:46.573 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:46.573 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.573 1+0 records in 00:08:46.573 1+0 records out 00:08:46.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461664 s, 8.9 MB/s 00:08:46.573 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.573 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:46.573 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.573 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:46.573 11:04:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:46.573 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.573 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.573 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:46.573 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.573 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:46.831 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd0", 00:08:46.831 "bdev_name": "Nvme0n1" 00:08:46.831 }, 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd1", 00:08:46.831 "bdev_name": "Nvme1n1p1" 00:08:46.831 }, 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd10", 00:08:46.831 "bdev_name": "Nvme1n1p2" 00:08:46.831 }, 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd11", 00:08:46.831 "bdev_name": "Nvme2n1" 00:08:46.831 }, 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd12", 00:08:46.831 "bdev_name": "Nvme2n2" 00:08:46.831 }, 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd13", 00:08:46.831 "bdev_name": "Nvme2n3" 
00:08:46.831 }, 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd14", 00:08:46.831 "bdev_name": "Nvme3n1" 00:08:46.831 } 00:08:46.831 ]' 00:08:46.831 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd0", 00:08:46.831 "bdev_name": "Nvme0n1" 00:08:46.831 }, 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd1", 00:08:46.831 "bdev_name": "Nvme1n1p1" 00:08:46.831 }, 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd10", 00:08:46.831 "bdev_name": "Nvme1n1p2" 00:08:46.831 }, 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd11", 00:08:46.831 "bdev_name": "Nvme2n1" 00:08:46.831 }, 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd12", 00:08:46.831 "bdev_name": "Nvme2n2" 00:08:46.831 }, 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd13", 00:08:46.831 "bdev_name": "Nvme2n3" 00:08:46.831 }, 00:08:46.831 { 00:08:46.831 "nbd_device": "/dev/nbd14", 00:08:46.831 "bdev_name": "Nvme3n1" 00:08:46.831 } 00:08:46.831 ]' 00:08:46.831 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:46.831 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:46.831 /dev/nbd1 00:08:46.831 /dev/nbd10 00:08:46.831 /dev/nbd11 00:08:46.831 /dev/nbd12 00:08:46.831 /dev/nbd13 00:08:46.831 /dev/nbd14' 00:08:46.831 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:46.831 /dev/nbd1 00:08:46.831 /dev/nbd10 00:08:46.831 /dev/nbd11 00:08:46.831 /dev/nbd12 00:08:46.831 /dev/nbd13 00:08:46.831 /dev/nbd14' 00:08:46.831 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.831 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:46.831 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:46.831 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:46.831 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:46.831 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:46.832 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:46.832 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:46.832 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:46.832 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:46.832 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:46.832 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:46.832 256+0 records in 00:08:46.832 256+0 records out 00:08:46.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119144 s, 88.0 MB/s 00:08:46.832 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:46.832 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:46.832 256+0 records in 00:08:46.832 256+0 records out 00:08:46.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0755391 s, 13.9 MB/s 00:08:46.832 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:46.832 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:47.090 256+0 records in 00:08:47.090 256+0 records out 00:08:47.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0745884 s, 14.1 MB/s 00:08:47.090 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.090 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:47.090 256+0 records in 00:08:47.090 256+0 records out 00:08:47.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0725603 s, 14.5 MB/s 00:08:47.090 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.090 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:47.090 256+0 records in 00:08:47.090 256+0 records out 00:08:47.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0739716 s, 14.2 MB/s 00:08:47.090 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.090 11:04:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:47.349 256+0 records in 00:08:47.349 256+0 records out 00:08:47.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0688152 s, 15.2 MB/s 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:47.349 256+0 records in 00:08:47.349 256+0 records out 00:08:47.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0707402 s, 14.8 MB/s 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:47.349 256+0 records in 00:08:47.349 256+0 records out 00:08:47.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0723301 s, 14.5 MB/s 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.349 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:47.608 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:47.608 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:47.608 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:47.608 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.608 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.608 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:47.608 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:47.608 11:04:16 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:47.608 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.608 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:47.866 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:47.866 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:47.866 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:47.866 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.866 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.866 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:47.866 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:47.866 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.866 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.866 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:48.124 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:48.124 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:48.124 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:48.124 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.124 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.124 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:48.124 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.124 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.124 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.124 11:04:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.393 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:48.651 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:48.651 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:48.651 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:48.651 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.651 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.651 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:48.651 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.651 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.651 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.651 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:48.909 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:48.909 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:48.909 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:48.909 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.909 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.909 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:48.909 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.909 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.909 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:48.909 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.909 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:49.168 11:04:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:49.426 malloc_lvol_verify 00:08:49.426 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:49.426 01a0dd74-f7c3-4207-b8ca-451ae5fd214f 00:08:49.684 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:49.684 dc552d29-a2e8-4ced-907d-b841e88b458f 00:08:49.684 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:49.942 /dev/nbd0 00:08:49.942 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:49.942 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:49.942 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:49.942 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:49.942 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:49.942 mke2fs 1.47.0 (5-Feb-2023) 00:08:49.942 Discarding device blocks: 0/4096 done 00:08:49.942 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:49.942 00:08:49.942 Allocating group tables: 0/1 done 00:08:49.942 Writing inode tables: 0/1 done 00:08:49.942 Creating journal (1024 blocks): done 00:08:49.942 Writing superblocks and filesystem accounting information: 0/1 done 00:08:49.942 00:08:49.942 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:49.942 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.942 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:49.942 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:49.942 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:49.942 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:49.942 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:50.200 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:50.200 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:50.200 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:50.200 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.200 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.200 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:50.200 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.200 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.200 11:04:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73768 00:08:50.200 11:04:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73768 ']' 00:08:50.200 11:04:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73768 00:08:50.201 11:04:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:50.201 11:04:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.201 11:04:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73768 00:08:50.201 killing process with pid 73768 00:08:50.201 11:04:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.201 11:04:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.201 11:04:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73768' 00:08:50.201 11:04:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73768 00:08:50.201 11:04:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73768 00:08:50.477 11:04:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:50.477 00:08:50.477 real 0m10.076s 00:08:50.477 user 0m14.712s 00:08:50.477 sys 0m3.503s 00:08:50.477 11:04:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.477 11:04:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:50.477 ************************************ 00:08:50.477 END TEST bdev_nbd 00:08:50.477 ************************************ 00:08:50.477 11:04:19 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:50.477 11:04:19 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:50.477 11:04:19 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:50.477 skipping fio tests on NVMe due to multi-ns failures. 00:08:50.477 11:04:19 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:50.477 11:04:19 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:50.477 11:04:19 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:50.477 11:04:19 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:50.477 11:04:19 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.477 11:04:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:50.477 ************************************ 00:08:50.477 START TEST bdev_verify 00:08:50.477 ************************************ 00:08:50.477 11:04:19 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:50.477 [2024-11-27 11:04:19.270876] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:50.477 [2024-11-27 11:04:19.270998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74172 ] 00:08:50.735 [2024-11-27 11:04:19.418613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:50.735 [2024-11-27 11:04:19.453419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.735 [2024-11-27 11:04:19.453465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.994 Running I/O for 5 seconds... 
00:08:53.304 21504.00 IOPS, 84.00 MiB/s [2024-11-27T11:04:23.121Z] 21504.00 IOPS, 84.00 MiB/s [2024-11-27T11:04:24.547Z] 22101.33 IOPS, 86.33 MiB/s [2024-11-27T11:04:25.135Z] 23008.00 IOPS, 89.88 MiB/s [2024-11-27T11:04:25.135Z] 23257.60 IOPS, 90.85 MiB/s 00:08:56.252 Latency(us) 00:08:56.252 [2024-11-27T11:04:25.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.252 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.252 Verification LBA range: start 0x0 length 0xbd0bd 00:08:56.252 Nvme0n1 : 5.04 1599.24 6.25 0.00 0.00 79750.81 16333.59 80659.69 00:08:56.252 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.252 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:56.252 Nvme0n1 : 5.06 1669.41 6.52 0.00 0.00 76469.25 12552.66 81466.29 00:08:56.252 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.252 Verification LBA range: start 0x0 length 0x4ff80 00:08:56.252 Nvme1n1p1 : 5.04 1598.74 6.25 0.00 0.00 79660.87 16131.94 72593.72 00:08:56.252 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.252 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:56.252 Nvme1n1p1 : 5.06 1668.93 6.52 0.00 0.00 76315.59 14518.74 70577.23 00:08:56.252 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.252 Verification LBA range: start 0x0 length 0x4ff7f 00:08:56.252 Nvme1n1p2 : 5.06 1605.31 6.27 0.00 0.00 79196.46 7208.96 67754.14 00:08:56.253 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.253 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:56.253 Nvme1n1p2 : 5.06 1668.43 6.52 0.00 0.00 76189.78 14216.27 64931.05 00:08:56.253 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.253 Verification LBA range: start 0x0 length 0x80000 00:08:56.253 Nvme2n1 : 5.07 1604.24 6.27 0.00 0.00 79065.13 9628.75 66140.95 00:08:56.253 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.253 Verification LBA range: start 0x80000 length 0x80000 00:08:56.253 Nvme2n1 : 5.07 1667.32 6.51 0.00 0.00 76067.48 13812.97 69367.34 00:08:56.253 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.253 Verification LBA range: start 0x0 length 0x80000 00:08:56.253 Nvme2n2 : 5.08 1613.47 6.30 0.00 0.00 78608.15 6024.27 66544.25 00:08:56.253 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.253 Verification LBA range: start 0x80000 length 0x80000 00:08:56.253 Nvme2n2 : 5.07 1666.15 6.51 0.00 0.00 75954.55 13308.85 71383.83 00:08:56.253 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.253 Verification LBA range: start 0x0 length 0x80000 00:08:56.253 Nvme2n3 : 5.08 1613.03 6.30 0.00 0.00 78470.98 6402.36 66947.54 00:08:56.253 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.253 Verification LBA range: start 0x80000 length 0x80000 00:08:56.253 Nvme2n3 : 5.08 1675.67 6.55 0.00 0.00 75428.14 2255.95 73400.32 00:08:56.253 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:56.253 Verification LBA range: start 0x0 length 0x20000 00:08:56.253 Nvme3n1 : 5.08 1612.59 6.30 0.00 0.00 78316.59 6654.42 69367.34 00:08:56.253 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:56.253 Verification LBA range: start 0x20000 length 0x20000 00:08:56.253 Nvme3n1 : 
5.09 1685.17 6.58 0.00 0.00 74900.99 5192.47 72997.02 00:08:56.253 [2024-11-27T11:04:25.136Z] =================================================================================================================== 00:08:56.253 [2024-11-27T11:04:25.136Z] Total : 22947.71 89.64 0.00 0.00 77422.11 2255.95 81466.29 00:08:56.818 00:08:56.818 real 0m6.443s 00:08:56.818 user 0m12.184s 00:08:56.818 sys 0m0.195s 00:08:56.818 11:04:25 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.818 11:04:25 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:56.818 ************************************ 00:08:56.818 END TEST bdev_verify 00:08:56.818 ************************************ 00:08:56.818 11:04:25 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:56.818 11:04:25 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:56.818 11:04:25 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.818 11:04:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:56.818 ************************************ 00:08:56.818 START TEST bdev_verify_big_io 00:08:56.818 ************************************ 00:08:56.818 11:04:25 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:57.076 [2024-11-27 11:04:25.751488] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:57.076 [2024-11-27 11:04:25.751596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74259 ] 00:08:57.076 [2024-11-27 11:04:25.898942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:57.076 [2024-11-27 11:04:25.934110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.076 [2024-11-27 11:04:25.934159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.640 Running I/O for 5 seconds... 
00:09:03.717 1598.00 IOPS, 99.88 MiB/s [2024-11-27T11:04:32.858Z] 3561.50 IOPS, 222.59 MiB/s 00:09:03.975 Latency(us) 00:09:03.975 [2024-11-27T11:04:32.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.975 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:03.975 Verification LBA range: start 0x0 length 0xbd0b 00:09:03.975 Nvme0n1 : 5.87 98.36 6.15 0.00 0.00 1242112.17 17442.66 1432516.14 00:09:03.975 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:03.976 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:03.976 Nvme0n1 : 5.94 107.78 6.74 0.00 0.00 1124823.91 10788.23 1438968.91 00:09:03.976 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:03.976 Verification LBA range: start 0x0 length 0x4ff8 00:09:03.976 Nvme1n1p1 : 6.26 66.45 4.15 0.00 0.00 1748101.06 101227.91 2181038.08 00:09:03.976 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:03.976 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:03.976 Nvme1n1p1 : 5.94 122.38 7.65 0.00 0.00 973229.09 89532.26 1226027.32 00:09:03.976 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:03.976 Verification LBA range: start 0x0 length 0x4ff7 00:09:03.976 Nvme1n1p2 : 6.12 105.18 6.57 0.00 0.00 1077946.02 135508.28 1187310.67 00:09:03.976 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:03.976 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:03.976 Nvme1n1p2 : 6.07 114.49 7.16 0.00 0.00 986466.88 114536.76 1613193.85 00:09:03.976 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:03.976 Verification LBA range: start 0x0 length 0x8000 00:09:03.976 Nvme2n1 : 6.12 108.74 6.80 0.00 0.00 1018944.46 87919.06 1206669.00 00:09:03.976 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:03.976 Verification LBA range: start 0x8000 length 0x8000 00:09:03.976 Nvme2n1 : 6.07 118.23 7.39 0.00 0.00 933382.05 115343.36 1632552.17 00:09:03.976 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:03.976 Verification LBA range: start 0x0 length 0x8000 00:09:03.976 Nvme2n2 : 6.16 114.24 7.14 0.00 0.00 943889.83 34885.32 1232480.10 00:09:03.976 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:03.976 Verification LBA range: start 0x8000 length 0x8000 00:09:03.976 Nvme2n2 : 6.16 127.04 7.94 0.00 0.00 845854.01 33473.77 1645457.72 00:09:03.976 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:03.976 Verification LBA range: start 0x0 length 0x8000 00:09:03.976 Nvme2n3 : 6.26 118.80 7.42 0.00 0.00 872401.52 46379.32 1251838.42 00:09:03.976 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:03.976 Verification LBA range: start 0x8000 length 0x8000 00:09:03.976 Nvme2n3 : 6.26 130.61 8.16 0.00 0.00 785643.67 44362.83 1690627.15 00:09:03.976 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:03.976 Verification LBA range: start 0x0 length 0x2000 00:09:03.976 Nvme3n1 : 6.31 137.65 8.60 0.00 0.00 733957.46 2205.54 1277649.53 00:09:03.976 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:03.976 Verification LBA range: start 0x2000 length 0x2000 00:09:03.976 Nvme3n1 : 6.32 154.23 9.64 0.00 0.00 649144.74 645.91 1716438.25 00:09:03.976 [2024-11-27T11:04:32.859Z] 
=================================================================================================================== 00:09:03.976 [2024-11-27T11:04:32.859Z] Total : 1624.16 101.51 0.00 0.00 951245.88 645.91 2181038.08 00:09:05.872 00:09:05.872 real 0m8.595s 00:09:05.872 user 0m16.402s 00:09:05.872 sys 0m0.231s 00:09:05.872 11:04:34 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.872 11:04:34 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:05.872 ************************************ 00:09:05.872 END TEST bdev_verify_big_io 00:09:05.872 ************************************ 00:09:05.872 11:04:34 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:05.872 11:04:34 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:05.872 11:04:34 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.873 11:04:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:05.873 ************************************ 00:09:05.873 START TEST bdev_write_zeroes 00:09:05.873 ************************************ 00:09:05.873 11:04:34 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:05.873 [2024-11-27 11:04:34.374389] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:05.873 [2024-11-27 11:04:34.374475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74363 ] 00:09:05.873 [2024-11-27 11:04:34.516348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.873 [2024-11-27 11:04:34.549751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.130 Running I/O for 1 seconds... 
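In the bdev_verify_big_io summary above, the MiB/s column follows directly from the IOPS column and the 64 KiB I/O size used by this run: 1624.16 IOPS x 65536 bytes is about 101.51 MiB/s, matching the reported total, and the same relation (IOPS x I/O size / 2^20) holds for the per-device rows. A one-line sanity check of that arithmetic:

    # Recomputes the totals row from IOPS and the 65536-byte I/O size on the command line.
    awk 'BEGIN { printf "%.2f MiB/s\n", 1624.16 * 65536 / 1048576 }'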
00:09:07.498 46847.00 IOPS, 183.00 MiB/s 00:09:07.498 Latency(us) 00:09:07.498 [2024-11-27T11:04:36.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.498 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.498 Nvme0n1 : 1.02 5949.26 23.24 0.00 0.00 21467.89 4864.79 152446.82 00:09:07.498 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.498 Nvme1n1p1 : 1.02 6943.60 27.12 0.00 0.00 18372.42 8922.98 123409.33 00:09:07.498 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.498 Nvme1n1p2 : 1.02 6810.09 26.60 0.00 0.00 18698.47 8771.74 124215.93 00:09:07.498 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.498 Nvme2n1 : 1.03 6802.40 26.57 0.00 0.00 18686.74 8822.15 123409.33 00:09:07.498 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.498 Nvme2n2 : 1.03 6794.68 26.54 0.00 0.00 18674.75 9477.51 118569.75 00:09:07.498 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.498 Nvme2n3 : 1.03 6787.03 26.51 0.00 0.00 18666.02 9427.10 118569.75 00:09:07.498 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:07.498 Nvme3n1 : 1.03 6903.79 26.97 0.00 0.00 18326.86 9376.69 118569.75 00:09:07.498 [2024-11-27T11:04:36.381Z] =================================================================================================================== 00:09:07.498 [2024-11-27T11:04:36.381Z] Total : 46990.85 183.56 0.00 0.00 18935.14 4864.79 152446.82 00:09:07.498 00:09:07.498 real 0m1.832s 00:09:07.498 user 0m1.546s 00:09:07.498 sys 0m0.176s 00:09:07.498 11:04:36 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.498 11:04:36 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:07.498 ************************************ 00:09:07.498 END TEST bdev_write_zeroes 00:09:07.498 ************************************ 00:09:07.498 11:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:07.498 11:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:07.498 11:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.498 11:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:07.498 ************************************ 00:09:07.498 START TEST bdev_json_nonenclosed 00:09:07.498 ************************************ 00:09:07.498 11:04:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:07.498 [2024-11-27 11:04:36.246668] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:07.498 [2024-11-27 11:04:36.246792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74400 ] 00:09:07.755 [2024-11-27 11:04:36.396868] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.755 [2024-11-27 11:04:36.431125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.755 [2024-11-27 11:04:36.431207] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:07.755 [2024-11-27 11:04:36.431223] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:07.755 [2024-11-27 11:04:36.431237] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:07.755 00:09:07.755 real 0m0.320s 00:09:07.755 user 0m0.122s 00:09:07.755 sys 0m0.095s 00:09:07.755 11:04:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.755 ************************************ 00:09:07.755 END TEST bdev_json_nonenclosed 00:09:07.755 ************************************ 00:09:07.755 11:04:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:07.756 11:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:07.756 11:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:07.756 11:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.756 11:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:07.756 ************************************ 00:09:07.756 START TEST bdev_json_nonarray 00:09:07.756 ************************************ 00:09:07.756 11:04:36 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:07.756 [2024-11-27 11:04:36.604729] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:07.756 [2024-11-27 11:04:36.604843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74425 ] 00:09:08.014 [2024-11-27 11:04:36.753171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.014 [2024-11-27 11:04:36.787293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.014 [2024-11-27 11:04:36.787381] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:08.014 [2024-11-27 11:04:36.787397] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:08.014 [2024-11-27 11:04:36.787407] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:08.014 00:09:08.014 real 0m0.320s 00:09:08.014 user 0m0.128s 00:09:08.014 sys 0m0.089s 00:09:08.014 ************************************ 00:09:08.014 END TEST bdev_json_nonarray 00:09:08.014 ************************************ 00:09:08.014 11:04:36 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.014 11:04:36 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:08.273 11:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:09:08.273 11:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:09:08.273 11:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:08.273 11:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.273 11:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.273 11:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:08.273 ************************************ 00:09:08.273 START TEST bdev_gpt_uuid 00:09:08.273 ************************************ 00:09:08.273 11:04:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:09:08.273 11:04:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:09:08.273 11:04:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:09:08.273 11:04:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74445 00:09:08.273 11:04:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:08.273 11:04:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74445 00:09:08.273 11:04:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74445 ']' 00:09:08.273 11:04:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:08.273 11:04:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.273 11:04:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.273 11:04:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.273 11:04:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.273 11:04:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:08.273 [2024-11-27 11:04:36.977739] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
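The two JSON tests above exercise the config loader's error paths: the file must be a JSON object ("not enclosed in {}") and its 'subsystems' key must be an array. A minimal sketch of the object shape the loader accepts; the attach parameters below are illustrative placeholders, not the contents of this run's bdev.json or of the deliberately malformed nonenclosed.json/nonarray.json fixtures:

    # Sketch only: writes a minimally well-formed config to a temporary path.
    cat > /tmp/minimal_bdev_config.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF

The bdev_gpt_uuid test whose spdk_tgt is starting here loads a config of this well-formed shape with 'rpc_cmd load_config -j .../test/bdev/bdev.json' before querying the GPT partition bdevs.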
00:09:08.273 [2024-11-27 11:04:36.977853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74445 ] 00:09:08.273 [2024-11-27 11:04:37.125908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.531 [2024-11-27 11:04:37.160189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.132 11:04:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.132 11:04:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:09:09.132 11:04:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:09.132 11:04:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.132 11:04:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:09.390 Some configs were skipped because the RPC state that can call them passed over. 00:09:09.390 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.390 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:09:09.390 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.390 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:09.390 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.390 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:09.390 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.390 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:09.390 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.390 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:09:09.390 { 00:09:09.390 "name": "Nvme1n1p1", 00:09:09.390 "aliases": [ 00:09:09.390 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:09.390 ], 00:09:09.390 "product_name": "GPT Disk", 00:09:09.390 "block_size": 4096, 00:09:09.390 "num_blocks": 655104, 00:09:09.390 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:09.390 "assigned_rate_limits": { 00:09:09.390 "rw_ios_per_sec": 0, 00:09:09.390 "rw_mbytes_per_sec": 0, 00:09:09.390 "r_mbytes_per_sec": 0, 00:09:09.390 "w_mbytes_per_sec": 0 00:09:09.390 }, 00:09:09.390 "claimed": false, 00:09:09.390 "zoned": false, 00:09:09.390 "supported_io_types": { 00:09:09.390 "read": true, 00:09:09.390 "write": true, 00:09:09.390 "unmap": true, 00:09:09.390 "flush": true, 00:09:09.390 "reset": true, 00:09:09.390 "nvme_admin": false, 00:09:09.390 "nvme_io": false, 00:09:09.390 "nvme_io_md": false, 00:09:09.390 "write_zeroes": true, 00:09:09.390 "zcopy": false, 00:09:09.390 "get_zone_info": false, 00:09:09.390 "zone_management": false, 00:09:09.390 "zone_append": false, 00:09:09.390 "compare": true, 00:09:09.390 "compare_and_write": false, 00:09:09.391 "abort": true, 00:09:09.391 "seek_hole": false, 00:09:09.391 "seek_data": false, 00:09:09.391 "copy": true, 00:09:09.391 "nvme_iov_md": false 00:09:09.391 }, 00:09:09.391 "driver_specific": { 
00:09:09.391 "gpt": { 00:09:09.391 "base_bdev": "Nvme1n1", 00:09:09.391 "offset_blocks": 256, 00:09:09.391 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:09.391 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:09.391 "partition_name": "SPDK_TEST_first" 00:09:09.391 } 00:09:09.391 } 00:09:09.391 } 00:09:09.391 ]' 00:09:09.391 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:09:09.391 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:09:09.391 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:09:09.391 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:09.391 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:09.391 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:09.391 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:09.391 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.391 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:09.391 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.391 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:09:09.391 { 00:09:09.391 "name": "Nvme1n1p2", 00:09:09.391 "aliases": [ 00:09:09.391 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:09.391 ], 00:09:09.391 "product_name": "GPT Disk", 00:09:09.391 "block_size": 4096, 00:09:09.391 "num_blocks": 655103, 00:09:09.391 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:09.391 "assigned_rate_limits": { 00:09:09.391 "rw_ios_per_sec": 0, 00:09:09.391 "rw_mbytes_per_sec": 0, 00:09:09.391 "r_mbytes_per_sec": 0, 00:09:09.391 "w_mbytes_per_sec": 0 00:09:09.391 }, 00:09:09.391 "claimed": false, 00:09:09.391 "zoned": false, 00:09:09.391 "supported_io_types": { 00:09:09.391 "read": true, 00:09:09.391 "write": true, 00:09:09.391 "unmap": true, 00:09:09.391 "flush": true, 00:09:09.391 "reset": true, 00:09:09.391 "nvme_admin": false, 00:09:09.391 "nvme_io": false, 00:09:09.391 "nvme_io_md": false, 00:09:09.391 "write_zeroes": true, 00:09:09.391 "zcopy": false, 00:09:09.391 "get_zone_info": false, 00:09:09.391 "zone_management": false, 00:09:09.391 "zone_append": false, 00:09:09.391 "compare": true, 00:09:09.391 "compare_and_write": false, 00:09:09.391 "abort": true, 00:09:09.391 "seek_hole": false, 00:09:09.391 "seek_data": false, 00:09:09.391 "copy": true, 00:09:09.391 "nvme_iov_md": false 00:09:09.391 }, 00:09:09.391 "driver_specific": { 00:09:09.391 "gpt": { 00:09:09.391 "base_bdev": "Nvme1n1", 00:09:09.391 "offset_blocks": 655360, 00:09:09.391 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:09.391 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:09.391 "partition_name": "SPDK_TEST_second" 00:09:09.391 } 00:09:09.391 } 00:09:09.391 } 00:09:09.391 ]' 00:09:09.391 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74445 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74445 ']' 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74445 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74445 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:09.650 killing process with pid 74445 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74445' 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74445 00:09:09.650 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74445 00:09:09.908 00:09:09.908 real 0m1.757s 00:09:09.908 user 0m1.908s 00:09:09.908 sys 0m0.347s 00:09:09.908 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.908 11:04:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:09.908 ************************************ 00:09:09.908 END TEST bdev_gpt_uuid 00:09:09.908 ************************************ 00:09:09.908 11:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:09:09.908 11:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:09.908 11:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:09:09.908 11:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:09.908 11:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:09.908 11:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:09.908 11:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:09.908 11:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:09.908 11:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:10.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:10.425 Waiting for block devices as requested 00:09:10.425 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.425 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:10.425 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.681 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.958 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:15.958 11:04:44 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:15.958 11:04:44 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:15.958 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:15.958 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:15.958 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:15.958 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:15.958 11:04:44 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:15.958 00:09:15.958 real 0m48.501s 00:09:15.958 user 1m2.486s 00:09:15.958 sys 0m7.504s 00:09:15.959 11:04:44 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.959 ************************************ 00:09:15.959 END TEST blockdev_nvme_gpt 00:09:15.959 ************************************ 00:09:15.959 11:04:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:15.959 11:04:44 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:15.959 11:04:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:15.959 11:04:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.959 11:04:44 -- common/autotest_common.sh@10 -- # set +x 00:09:15.959 ************************************ 00:09:15.959 START TEST nvme 00:09:15.959 ************************************ 00:09:15.959 11:04:44 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:15.959 * Looking for test storage... 00:09:15.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:15.959 11:04:44 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:15.959 11:04:44 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:09:15.959 11:04:44 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:16.219 11:04:44 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:16.219 11:04:44 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.219 11:04:44 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.219 11:04:44 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.219 11:04:44 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.219 11:04:44 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.219 11:04:44 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.219 11:04:44 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.219 11:04:44 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.219 11:04:44 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.219 11:04:44 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.219 11:04:44 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.219 11:04:44 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:16.219 11:04:44 nvme -- scripts/common.sh@345 -- # : 1 00:09:16.219 11:04:44 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.219 11:04:44 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.219 11:04:44 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:16.219 11:04:44 nvme -- scripts/common.sh@353 -- # local d=1 00:09:16.219 11:04:44 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.219 11:04:44 nvme -- scripts/common.sh@355 -- # echo 1 00:09:16.219 11:04:44 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.219 11:04:44 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:16.219 11:04:44 nvme -- scripts/common.sh@353 -- # local d=2 00:09:16.219 11:04:44 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.219 11:04:44 nvme -- scripts/common.sh@355 -- # echo 2 00:09:16.219 11:04:44 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.219 11:04:44 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.219 11:04:44 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.219 11:04:44 nvme -- scripts/common.sh@368 -- # return 0 00:09:16.219 11:04:44 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.219 11:04:44 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:16.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.219 --rc genhtml_branch_coverage=1 00:09:16.219 --rc genhtml_function_coverage=1 00:09:16.219 --rc genhtml_legend=1 00:09:16.219 --rc geninfo_all_blocks=1 00:09:16.219 --rc geninfo_unexecuted_blocks=1 00:09:16.219 00:09:16.219 ' 00:09:16.219 11:04:44 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:16.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.219 --rc genhtml_branch_coverage=1 00:09:16.219 --rc genhtml_function_coverage=1 00:09:16.219 --rc genhtml_legend=1 00:09:16.219 --rc geninfo_all_blocks=1 00:09:16.219 --rc geninfo_unexecuted_blocks=1 00:09:16.219 00:09:16.219 ' 00:09:16.219 11:04:44 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:16.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.219 --rc genhtml_branch_coverage=1 00:09:16.219 --rc genhtml_function_coverage=1 00:09:16.219 --rc genhtml_legend=1 00:09:16.219 --rc geninfo_all_blocks=1 00:09:16.219 --rc geninfo_unexecuted_blocks=1 00:09:16.219 00:09:16.219 ' 00:09:16.219 11:04:44 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:16.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.219 --rc genhtml_branch_coverage=1 00:09:16.219 --rc genhtml_function_coverage=1 00:09:16.219 --rc genhtml_legend=1 00:09:16.219 --rc geninfo_all_blocks=1 00:09:16.219 --rc geninfo_unexecuted_blocks=1 00:09:16.219 00:09:16.219 ' 00:09:16.219 11:04:44 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:16.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:17.082 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:17.082 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:17.082 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:17.082 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:17.082 11:04:45 nvme -- nvme/nvme.sh@79 -- # uname 00:09:17.082 11:04:45 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:17.082 11:04:45 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:17.082 11:04:45 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:17.082 11:04:45 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:17.082 11:04:45 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:09:17.082 11:04:45 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:09:17.082 Waiting for stub to ready for secondary processes... 00:09:17.082 11:04:45 nvme -- common/autotest_common.sh@1071 -- # stubpid=75069 00:09:17.082 11:04:45 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:09:17.082 11:04:45 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:17.082 11:04:45 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/75069 ]] 00:09:17.082 11:04:45 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:17.082 11:04:45 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:09:17.343 [2024-11-27 11:04:45.982092] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:17.343 [2024-11-27 11:04:45.982208] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:17.916 [2024-11-27 11:04:46.706642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:17.916 [2024-11-27 11:04:46.727122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.916 [2024-11-27 11:04:46.727630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.916 [2024-11-27 11:04:46.727691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.916 [2024-11-27 11:04:46.743524] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:17.916 [2024-11-27 11:04:46.743627] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:17.916 [2024-11-27 11:04:46.755514] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:17.916 [2024-11-27 11:04:46.755761] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:17.916 [2024-11-27 11:04:46.756229] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:17.916 [2024-11-27 11:04:46.756457] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:17.916 [2024-11-27 11:04:46.756560] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:17.916 [2024-11-27 11:04:46.756977] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:17.916 [2024-11-27 11:04:46.757269] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:17.916 [2024-11-27 11:04:46.757310] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:17.916 [2024-11-27 11:04:46.758936] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:17.916 [2024-11-27 11:04:46.759227] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:17.916 [2024-11-27 11:04:46.759302] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:17.916 [2024-11-27 11:04:46.759415] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:17.916 [2024-11-27 11:04:46.759542] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:18.177 done. 00:09:18.177 11:04:46 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:18.177 11:04:46 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:09:18.177 11:04:46 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:18.177 11:04:46 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:09:18.177 11:04:46 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.177 11:04:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:18.177 ************************************ 00:09:18.177 START TEST nvme_reset 00:09:18.177 ************************************ 00:09:18.177 11:04:46 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:18.437 Initializing NVMe Controllers 00:09:18.437 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:18.437 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:18.437 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:18.437 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:18.437 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:18.437 00:09:18.437 real 0m0.203s 00:09:18.437 user 0m0.058s 00:09:18.437 sys 0m0.096s 00:09:18.437 11:04:47 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.437 11:04:47 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:18.437 ************************************ 00:09:18.437 END TEST nvme_reset 00:09:18.437 ************************************ 00:09:18.437 11:04:47 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:18.437 11:04:47 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:18.437 11:04:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.437 11:04:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:18.437 ************************************ 00:09:18.437 START TEST nvme_identify 00:09:18.437 ************************************ 00:09:18.437 11:04:47 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:09:18.437 11:04:47 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:18.437 11:04:47 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:18.437 11:04:47 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:18.437 11:04:47 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:18.437 11:04:47 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:18.437 11:04:47 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:09:18.437 11:04:47 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:18.437 11:04:47 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:18.437 11:04:47 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:18.437 11:04:47 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:18.437 11:04:47 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:18.437 11:04:47 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:18.701 
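The nvme_identify helper above first collects the NVMe PCI addresses with gen_nvme.sh and jq (this run found 0000:00:10.0 through 0000:00:13.0), then invokes the identify example whose output follows. A minimal sketch of that enumeration step, assuming the same repository root as this run:

    # Prints one PCI address per line, as logged above.
    rootdir=/home/vagrant/spdk_repo/spdk
    "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'

The controller details that follow are the output of build/bin/spdk_nvme_identify -i 0, reported per controller and starting with the QEMU NVMe device at 0000:00:13.0.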
===================================================== 00:09:18.701 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:18.701 ===================================================== 00:09:18.701 Controller Capabilities/Features 00:09:18.701 ================================ 00:09:18.701 Vendor ID: 1b36 00:09:18.701 Subsystem Vendor ID: 1af4 00:09:18.701 Serial Number: 12343 00:09:18.701 Model Number: QEMU NVMe Ctrl 00:09:18.701 Firmware Version: 8.0.0 00:09:18.701 Recommended Arb Burst: 6 00:09:18.701 IEEE OUI Identifier: 00 54 52 00:09:18.701 Multi-path I/O 00:09:18.701 May have multiple subsystem ports: No 00:09:18.701 May have multiple controllers: Yes 00:09:18.701 Associated with SR-IOV VF: No 00:09:18.701 Max Data Transfer Size: 524288 00:09:18.701 Max Number of Namespaces: 256 00:09:18.701 Max Number of I/O Queues: 64 00:09:18.701 NVMe Specification Version (VS): 1.4 00:09:18.701 NVMe Specification Version (Identify): 1.4 00:09:18.701 Maximum Queue Entries: 2048 00:09:18.701 Contiguous Queues Required: Yes 00:09:18.701 Arbitration Mechanisms Supported 00:09:18.701 Weighted Round Robin: Not Supported 00:09:18.701 Vendor Specific: Not Supported 00:09:18.701 Reset Timeout: 7500 ms 00:09:18.701 Doorbell Stride: 4 bytes 00:09:18.701 NVM Subsystem Reset: Not Supported 00:09:18.701 Command Sets Supported 00:09:18.701 NVM Command Set: Supported 00:09:18.701 Boot Partition: Not Supported 00:09:18.701 Memory Page Size Minimum: 4096 bytes 00:09:18.701 Memory Page Size Maximum: 65536 bytes 00:09:18.701 Persistent Memory Region: Not Supported 00:09:18.701 Optional Asynchronous Events Supported 00:09:18.701 Namespace Attribute Notices: Supported 00:09:18.701 Firmware Activation Notices: Not Supported 00:09:18.701 ANA Change Notices: Not Supported 00:09:18.701 PLE Aggregate Log Change Notices: Not Supported 00:09:18.701 LBA Status Info Alert Notices: Not Supported 00:09:18.701 EGE Aggregate Log Change Notices: Not Supported 00:09:18.701 Normal NVM Subsystem Shutdown event: Not Supported 00:09:18.701 Zone Descriptor Change Notices: Not Supported 00:09:18.701 Discovery Log Change Notices: Not Supported 00:09:18.701 Controller Attributes 00:09:18.701 128-bit Host Identifier: Not Supported 00:09:18.701 Non-Operational Permissive Mode: Not Supported 00:09:18.701 NVM Sets: Not Supported 00:09:18.701 Read Recovery Levels: Not Supported 00:09:18.701 Endurance Groups: Supported 00:09:18.701 Predictable Latency Mode: Not Supported 00:09:18.701 Traffic Based Keep ALive: Not Supported 00:09:18.701 Namespace Granularity: Not Supported 00:09:18.701 SQ Associations: Not Supported 00:09:18.701 UUID List: Not Supported 00:09:18.701 Multi-Domain Subsystem: Not Supported 00:09:18.701 Fixed Capacity Management: Not Supported 00:09:18.701 Variable Capacity Management: Not Supported 00:09:18.701 Delete Endurance Group: Not Supported 00:09:18.701 Delete NVM Set: Not Supported 00:09:18.701 Extended LBA Formats Supported: Supported 00:09:18.701 Flexible Data Placement Supported: Supported 00:09:18.701 00:09:18.701 Controller Memory Buffer Support 00:09:18.701 ================================ 00:09:18.701 Supported: No 00:09:18.701 00:09:18.701 Persistent Memory Region Support 00:09:18.701 ================================ 00:09:18.701 Supported: No 00:09:18.701 00:09:18.701 Admin Command Set Attributes 00:09:18.701 ============================ 00:09:18.701 Security Send/Receive: Not Supported 00:09:18.701 Format NVM: Supported 00:09:18.701 Firmware Activate/Download: Not Supported 00:09:18.701 Namespace Management: Supported 
00:09:18.701 Device Self-Test: Not Supported 00:09:18.701 Directives: Supported 00:09:18.701 NVMe-MI: Not Supported 00:09:18.701 Virtualization Management: Not Supported 00:09:18.701 Doorbell Buffer Config: Supported 00:09:18.701 Get LBA Status Capability: Not Supported 00:09:18.701 Command & Feature Lockdown Capability: Not Supported 00:09:18.701 Abort Command Limit: 4 00:09:18.701 Async Event Request Limit: 4 00:09:18.701 Number of Firmware Slots: N/A 00:09:18.701 Firmware Slot 1 Read-Only: N/A 00:09:18.701 Firmware Activation Without Reset: N/A 00:09:18.701 Multiple Update Detection Support: N/A 00:09:18.701 Firmware Update Granularity: No Information Provided 00:09:18.701 Per-Namespace SMART Log: Yes 00:09:18.701 Asymmetric Namespace Access Log Page: Not Supported 00:09:18.701 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:18.701 Command Effects Log Page: Supported 00:09:18.701 Get Log Page Extended Data: Supported 00:09:18.701 Telemetry Log Pages: Not Supported 00:09:18.701 Persistent Event Log Pages: Not Supported 00:09:18.701 Supported Log Pages Log Page: May Support 00:09:18.701 Commands Supported & Effects Log Page: Not Supported 00:09:18.701 Feature Identifiers & Effects Log Page:May Support 00:09:18.701 NVMe-MI Commands & Effects Log Page: May Support 00:09:18.701 Data Area 4 for Telemetry Log: Not Supported 00:09:18.701 Error Log Page Entries Supported: 1 00:09:18.701 Keep Alive: Not Supported 00:09:18.701 00:09:18.701 NVM Command Set Attributes 00:09:18.701 ========================== 00:09:18.701 Submission Queue Entry Size 00:09:18.701 Max: 64 00:09:18.701 Min: 64 00:09:18.701 Completion Queue Entry Size 00:09:18.701 Max: 16 00:09:18.701 Min: 16 00:09:18.701 Number of Namespaces: 256 00:09:18.701 Compare Command: Supported 00:09:18.701 Write Uncorrectable Command: Not Supported 00:09:18.701 Dataset Management Command: Supported 00:09:18.701 Write Zeroes Command: Supported 00:09:18.701 Set Features Save Field: Supported 00:09:18.701 Reservations: Not Supported 00:09:18.701 Timestamp: Supported 00:09:18.701 Copy: Supported 00:09:18.701 Volatile Write Cache: Present 00:09:18.701 Atomic Write Unit (Normal): 1 00:09:18.701 Atomic Write Unit (PFail): 1 00:09:18.701 Atomic Compare & Write Unit: 1 00:09:18.701 Fused Compare & Write: Not Supported 00:09:18.701 Scatter-Gather List 00:09:18.701 SGL Command Set: Supported 00:09:18.701 SGL Keyed: Not Supported 00:09:18.701 SGL Bit Bucket Descriptor: Not Supported 00:09:18.701 SGL Metadata Pointer: Not Supported 00:09:18.701 Oversized SGL: Not Supported 00:09:18.701 SGL Metadata Address: Not Supported 00:09:18.701 SGL Offset: Not Supported 00:09:18.701 Transport SGL Data Block: Not Supported 00:09:18.701 Replay Protected Memory Block: Not Supported 00:09:18.701 00:09:18.701 Firmware Slot Information 00:09:18.701 ========================= 00:09:18.701 Active slot: 1 00:09:18.701 Slot 1 Firmware Revision: 1.0 00:09:18.701 00:09:18.701 00:09:18.701 Commands Supported and Effects 00:09:18.701 ============================== 00:09:18.701 Admin Commands 00:09:18.701 -------------- 00:09:18.701 Delete I/O Submission Queue (00h): Supported 00:09:18.701 Create I/O Submission Queue (01h): Supported 00:09:18.701 Get Log Page (02h): Supported 00:09:18.701 Delete I/O Completion Queue (04h): Supported 00:09:18.701 Create I/O Completion Queue (05h): Supported 00:09:18.701 Identify (06h): Supported 00:09:18.701 Abort (08h): Supported 00:09:18.701 Set Features (09h): Supported 00:09:18.701 Get Features (0Ah): Supported 00:09:18.701 Asynchronous Event 
Request (0Ch): Supported 00:09:18.701 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:18.701 Directive Send (19h): Supported 00:09:18.701 Directive Receive (1Ah): Supported 00:09:18.701 Virtualization Management (1Ch): Supported 00:09:18.701 Doorbell Buffer Config (7Ch): Supported 00:09:18.701 Format NVM (80h): Supported LBA-Change 00:09:18.701 I/O Commands 00:09:18.701 ------------ 00:09:18.701 Flush (00h): Supported LBA-Change 00:09:18.701 Write (01h): Supported LBA-Change 00:09:18.701 Read (02h): Supported 00:09:18.701 Compare (05h): Supported 00:09:18.701 Write Zeroes (08h): Supported LBA-Change 00:09:18.701 Dataset Management (09h): Supported LBA-Change 00:09:18.701 Unknown (0Ch): Supported 00:09:18.701 Unknown (12h): Supported 00:09:18.701 Copy (19h): Supported LBA-Change 00:09:18.701 Unknown (1Dh): Supported LBA-Change 00:09:18.701 00:09:18.701 Error Log 00:09:18.701 ========= 00:09:18.701 00:09:18.701 Arbitration 00:09:18.701 =========== 00:09:18.701 Arbitration Burst: no limit 00:09:18.701 00:09:18.701 Power Management 00:09:18.701 ================ 00:09:18.701 Number of Power States: 1 00:09:18.701 Current Power State: Power State #0 00:09:18.701 Power State #0: 00:09:18.701 Max Power: 25.00 W 00:09:18.702 Non-Operational State: Operational 00:09:18.702 Entry Latency: 16 microseconds 00:09:18.702 Exit Latency: 4 microseconds 00:09:18.702 Relative Read Throughput: 0 00:09:18.702 Relative Read Latency: 0 00:09:18.702 Relative Write Throughput: 0 00:09:18.702 Relative Write Latency: 0 00:09:18.702 Idle Power: Not Reported 00:09:18.702 Active Power: Not Reported 00:09:18.702 Non-Operational Permissive Mode: Not Supported 00:09:18.702 00:09:18.702 Health Information 00:09:18.702 ================== 00:09:18.702 Critical Warnings: 00:09:18.702 Available Spare Space: OK 00:09:18.702 Temperature: OK 00:09:18.702 Device Reliability: OK 00:09:18.702 Read Only: No 00:09:18.702 Volatile Memory Backup: OK 00:09:18.702 Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.702 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:18.702 Available Spare: 0% 00:09:18.702 Available Spare Threshold: 0% 00:09:18.702 Life Percentage Used: 0% 00:09:18.702 Data Units Read: 831 00:09:18.702 Data Units Written: 760 00:09:18.702 Host Read Commands: 40833 00:09:18.702 Host Write Commands: 40256 00:09:18.702 Controller Busy Time: 0 minutes 00:09:18.702 Power Cycles: 0 00:09:18.702 Power On Hours: 0 hours 00:09:18.702 Unsafe Shutdowns: 0 00:09:18.702 Unrecoverable Media Errors: 0 00:09:18.702 Lifetime Error Log Entries: 0 00:09:18.702 Warning Temperature Time: 0 minutes 00:09:18.702 Critical Temperature Time: 0 minutes 00:09:18.702 00:09:18.702 Number of Queues 00:09:18.702 ================ 00:09:18.702 Number of I/O Submission Queues: 64 00:09:18.702 Number of I/O Completion Queues: 64 00:09:18.702 00:09:18.702 ZNS Specific Controller Data 00:09:18.702 ============================ 00:09:18.702 Zone Append Size Limit: 0 00:09:18.702 00:09:18.702 00:09:18.702 Active Namespaces 00:09:18.702 ================= 00:09:18.702 Namespace ID:1 00:09:18.702 Error Recovery Timeout: Unlimited 00:09:18.702 Command Set Identifier: NVM (00h) 00:09:18.702 Deallocate: Supported 00:09:18.702 Deallocated/Unwritten Error: Supported 00:09:18.702 Deallocated Read Value: All 0x00 00:09:18.702 Deallocate in Write Zeroes: Not Supported 00:09:18.702 Deallocated Guard Field: 0xFFFF 00:09:18.702 Flush: Supported 00:09:18.702 Reservation: Not Supported 00:09:18.702 Namespace Sharing Capabilities: Multiple Controllers 
00:09:18.702 Size (in LBAs): 262144 (1GiB) 00:09:18.702 Capacity (in LBAs): 262144 (1GiB) 00:09:18.702 Utilization (in LBAs): 262144 (1GiB) 00:09:18.702 Thin Provisioning: Not Supported 00:09:18.702 Per-NS Atomic Units: No 00:09:18.702 Maximum Single Source Range Length: 128 00:09:18.702 Maximum Copy Length: 128 00:09:18.702 Maximum Source Range Count: 128 00:09:18.702 NGUID/EUI64 Never Reused: No 00:09:18.702 Namespace Write Protected: No 00:09:18.702 Endurance group ID: 1 00:09:18.702 Number of LBA Formats: 8 00:09:18.702 Current LBA Format: LBA Format #04 00:09:18.702 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:18.702 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:18.702 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:18.702 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:18.702 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:18.702 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:18.702 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:18.702 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:18.702 00:09:18.702 Get Feature FDP: 00:09:18.702 ================ 00:09:18.702 Enabled: Yes 00:09:18.702 FDP configuration index: 0 00:09:18.702 00:09:18.702 FDP configurations log page 00:09:18.702 =========================== 00:09:18.702 Number of FDP configurations: 1 00:09:18.702 Version: 0 00:09:18.702 Size: 112 00:09:18.702 FDP Configuration Descriptor: 0 00:09:18.702 Descriptor Size: 96 00:09:18.702 Reclaim Group Identifier format: 2 00:09:18.702 FDP Volatile Write Cache: Not Present 00:09:18.702 FDP Configuration: Valid 00:09:18.702 Vendor Specific Size: 0 00:09:18.702 Number of Reclaim Groups: 2 00:09:18.702 Number of Recalim Unit Handles: 8 00:09:18.702 Max Placement Identifiers: 128 00:09:18.702 Number of Namespaces Suppprted: 256 00:09:18.702 Reclaim unit Nominal Size: 6000000 bytes 00:09:18.702 Estimated Reclaim Unit Time Limit: Not Reported 00:09:18.702 RUH Desc #000: RUH Type: Initially Isolated 00:09:18.702 RUH Desc #001: RUH Type: Initially Isolated 00:09:18.702 RUH Desc #002: RUH Type: Initially Isolated 00:09:18.702 RUH Desc #003: RUH Type: Initially Isolated 00:09:18.702 RUH Desc #004: RUH Type: Initially Isolated 00:09:18.702 RUH Desc #005: RUH Type: Initially Isolated 00:09:18.702 RUH Desc #006: RUH Type: Initially Isolated 00:09:18.702 RUH Desc #007: RUH Type: Initially Isolated 00:09:18.702 00:09:18.702 FDP reclaim unit handle usage log page 00:09:18.702 ====================================== 00:09:18.702 Number of Reclaim Unit Handles: 8 00:09:18.702 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:18.702 RUH Usage Desc #001: RUH Attributes: Unused 00:09:18.702 RUH Usage Desc #002: RUH Attributes: Unused 00:09:18.702 RUH Usage Desc #003: RUH Attributes: Unused 00:09:18.702 RUH Usage Desc #004: RUH Attributes: Unused 00:09:18.702 RUH Usage Desc #005: RUH Attributes: Unused 00:09:18.702 RUH Usage Desc #006: RUH Attributes: Unused 00:09:18.702 RUH Usage Desc #007: RUH Attributes: Unused 00:09:18.702 00:09:18.702 FDP statistics log page 00:09:18.702 ======================= 00:09:18.702 Host bytes with metadata written: 477208576 00:09:18.702 Media bytes with metadata written: 477261824 00:09:18.702 Media bytes erased: 0 00:09:18.702 00:09:18.702 FDP events log page 00:09:18.702 =================== 00:09:18.702 Number of FDP events: 0 00:09:18.702 00:09:18.702 NVM Specific Namespace Data 00:09:18.702 =========================== 00:09:18.702 Logical Block Storage Tag Mask: 0 00:09:18.702 Protection 
Information Capabilities: 00:09:18.702 16b Guard Protection Information Storage Tag Support: No 00:09:18.702 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:18.702 Storage Tag Check Read Support: No 00:09:18.702 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.702 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.702 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.702 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.702 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.702 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.702 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.702 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.702 ===================================================== 00:09:18.702 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:18.702 ===================================================== 00:09:18.702 Controller Capabilities/Features 00:09:18.702 ================================ 00:09:18.702 Vendor ID: 1b36 00:09:18.702 Subsystem Vendor ID: 1af4 00:09:18.702 Serial Number: 12340 00:09:18.702 Model Number: QEMU NVMe Ctrl 00:09:18.702 Firmware Version: 8.0.0 00:09:18.702 Recommended Arb Burst: 6 00:09:18.702 IEEE OUI Identifier: 00 54 52 00:09:18.702 Multi-path I/O 00:09:18.702 May have multiple subsystem ports: No 00:09:18.702 May have multiple controllers: No 00:09:18.702 Associated with SR-IOV VF: No 00:09:18.702 Max Data Transfer Size: 524288 00:09:18.702 Max Number of Namespaces: 256 00:09:18.702 Max Number of I/O Queues: 64 00:09:18.702 NVMe Specification Version (VS): 1.4 00:09:18.702 NVMe Specification Version (Identify): 1.4 00:09:18.702 Maximum Queue Entries: 2048 00:09:18.702 Contiguous Queues Required: Yes 00:09:18.702 Arbitration Mechanisms Supported 00:09:18.702 Weighted Round Robin: Not Supported 00:09:18.702 Vendor Specific: Not Supported 00:09:18.702 Reset Timeout: 7500 ms 00:09:18.702 Doorbell Stride: 4 bytes 00:09:18.702 NVM Subsystem Reset: Not Supported 00:09:18.702 Command Sets Supported 00:09:18.702 NVM Command Set: Supported 00:09:18.702 Boot Partition: Not Supported 00:09:18.702 Memory Page Size Minimum: 4096 bytes 00:09:18.702 Memory Page Size Maximum: 65536 bytes 00:09:18.702 Persistent Memory Region: Not Supported 00:09:18.702 Optional Asynchronous Events Supported 00:09:18.702 Namespace Attribute Notices: Supported 00:09:18.702 Firmware Activation Notices: Not Supported 00:09:18.702 ANA Change Notices: Not Supported 00:09:18.702 PLE Aggregate Log Change Notices: Not Supported 00:09:18.702 LBA Status Info Alert Notices: Not Supported 00:09:18.702 EGE Aggregate Log Change Notices: Not Supported 00:09:18.702 Normal NVM Subsystem Shutdown event: Not Supported 00:09:18.702 Zone Descriptor Change Notices: Not Supported 00:09:18.702 Discovery Log Change Notices: Not Supported 00:09:18.702 Controller Attributes 00:09:18.702 128-bit Host Identifier: Not Supported 00:09:18.702 Non-Operational Permissive Mode: Not Supported 00:09:18.702 NVM Sets: Not Supported 00:09:18.702 Read Recovery Levels: Not Supported 00:09:18.703 Endurance Groups: Not Supported 00:09:18.703 Predictable Latency Mode: Not Supported 00:09:18.703 Traffic 
Based Keep ALive: Not Supported 00:09:18.703 Namespace Granularity: Not Supported 00:09:18.703 SQ Associations: Not Supported 00:09:18.703 UUID List: Not Supported 00:09:18.703 Multi-Domain Subsystem: Not Supported 00:09:18.703 Fixed Capacity Management: Not Supported 00:09:18.703 Variable Capacity Management: Not Supported 00:09:18.703 Delete Endurance Group: Not Supported 00:09:18.703 Delete NVM Set: Not Supported 00:09:18.703 Extended LBA Formats Supported: Supported 00:09:18.703 Flexible Data Placement Supported: Not Supported 00:09:18.703 00:09:18.703 Controller Memory Buffer Support 00:09:18.703 ================================ 00:09:18.703 Supported: No 00:09:18.703 00:09:18.703 Persistent Memory Region Support 00:09:18.703 ================================ 00:09:18.703 Supported: No 00:09:18.703 00:09:18.703 Admin Command Set Attributes 00:09:18.703 ============================ 00:09:18.703 Security Send/Receive: Not Supported 00:09:18.703 Format NVM: Supported 00:09:18.703 Firmware Activate/Download: Not Supported 00:09:18.703 Namespace Management: Supported 00:09:18.703 Device Self-Test: Not Supported 00:09:18.703 Directives: Supported 00:09:18.703 NVMe-MI: Not Supported 00:09:18.703 Virtualization Management: Not Supported 00:09:18.703 Doorbell Buffer Config: Supported 00:09:18.703 Get LBA Status Capability: Not Supported 00:09:18.703 Command & Feature Lockdown Capability: Not Supported 00:09:18.703 Abort Command Limit: 4 00:09:18.703 Async Event Request Limit: 4 00:09:18.703 Number of Firmware Slots: N/A 00:09:18.703 Firmware Slot 1 Read-Only: N/A 00:09:18.703 Firmware Activation Without Reset: N/A 00:09:18.703 Multiple Update Detection Support: N/A 00:09:18.703 Firmware Update Granularity: No Information Provided 00:09:18.703 Per-Namespace SMART Log: Yes 00:09:18.703 Asymmetric Namespace Access Log Page: Not Supported 00:09:18.703 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:18.703 Command Effects Log Page: Supported 00:09:18.703 Get Log Page Extended Data: Supported 00:09:18.703 Telemetry Log Pages: Not Supported 00:09:18.703 Persistent Event Log Pages: Not Supported 00:09:18.703 Supported Log Pages Log Page: May Support 00:09:18.703 Commands Supported & Effects Log Page: Not Supported 00:09:18.703 Feature Identifiers & Effects Log Page:May Support 00:09:18.703 NVMe-MI Commands & Effects Log Page: May Support 00:09:18.703 Data Area 4 for Telemetry Log: Not Supported 00:09:18.703 Error Log Page Entries Supported: 1 00:09:18.703 Keep Alive: Not Supported 00:09:18.703 00:09:18.703 NVM Command Set Attributes 00:09:18.703 ========================== 00:09:18.703 Submission Queue Entry Size 00:09:18.703 Max: 64 00:09:18.703 Min: 64 00:09:18.703 Completion Queue Entry Size 00:09:18.703 Max: 16 00:09:18.703 Min: 16 00:09:18.703 Number of Namespaces: 256 00:09:18.703 Compare Command: Supported 00:09:18.703 Write Uncorrectable Command: Not Supported 00:09:18.703 Dataset Management Command: Supported 00:09:18.703 Write Zeroes Command: Supported 00:09:18.703 Set Features Save Field: Supported 00:09:18.703 Reservations: Not Supported 00:09:18.703 Timestamp: Supported 00:09:18.703 Copy: Supported 00:09:18.703 Volatile Write Cache: Present 00:09:18.703 Atomic Write Unit (Normal): 1 00:09:18.703 Atomic Write Unit (PFail): 1 00:09:18.703 Atomic Compare & Write Unit: 1 00:09:18.703 Fused Compare & Write: Not Supported 00:09:18.703 Scatter-Gather List 00:09:18.703 SGL Command Set: Supported 00:09:18.703 SGL Keyed: Not Supported 00:09:18.703 SGL Bit Bucket Descriptor: Not Supported 00:09:18.703 
SGL Metadata Pointer: Not Supported 00:09:18.703 Oversized SGL: Not Supported 00:09:18.703 SGL Metadata Address: Not Supported 00:09:18.703 SGL Offset: Not Supported 00:09:18.703 Transport SGL Data Block: Not Supported 00:09:18.703 Replay Protected Memory Block: Not Supported 00:09:18.703 00:09:18.703 Firmware Slot Information 00:09:18.703 ========================= 00:09:18.703 Active slot: 1 00:09:18.703 Slot 1 Firmware Revision: 1.0 00:09:18.703 00:09:18.703 00:09:18.703 Commands Supported and Effects 00:09:18.703 ============================== 00:09:18.703 Admin Commands 00:09:18.703 -------------- 00:09:18.703 Delete I/O Submission Queue (00h): Supported 00:09:18.703 Create I/O Submission Queue (01h): Supported 00:09:18.703 Get Log Page (02h): Supported 00:09:18.703 Delete I/O Completion Queue (04h): Supported 00:09:18.703 Create I/O Completion Queue (05h): Supported 00:09:18.703 Identify (06h): Supported 00:09:18.703 Abort (08h): Supported 00:09:18.703 Set Features (09h): Supported 00:09:18.703 Get Features (0Ah): Supported 00:09:18.703 Asynchronous Event Request (0Ch): Supported 00:09:18.703 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:18.703 Directive Send (19h): Supported 00:09:18.703 Directive Receive (1Ah): Supported 00:09:18.703 Virtualization Management (1Ch): Supported 00:09:18.703 Doorbell Buffer Config (7Ch): Supported 00:09:18.703 Format NVM (80h): Supported LBA-Change 00:09:18.703 I/O Commands 00:09:18.703 ------------ 00:09:18.703 Flush (00h): Supported LBA-Change 00:09:18.703 Write (01h): Supported LBA-Change 00:09:18.703 Read (02h): Supported 00:09:18.703 Compare (05h): Supported 00:09:18.703 Write Zeroes (08h): Supported LBA-Change 00:09:18.703 Dataset Management (09h): Supported LBA-Change 00:09:18.703 Unknown (0Ch): Supported 00:09:18.703 Unknown (12h): Supported 00:09:18.703 Copy (19h): Supported LBA-Change 00:09:18.703 Unknown (1Dh): Supported LBA-Change 00:09:18.703 00:09:18.703 Error Log 00:09:18.703 ========= 00:09:18.703 00:09:18.703 Arbitration 00:09:18.703 =========== 00:09:18.703 Arbitration Burst: no limit 00:09:18.703 00:09:18.703 Power Management 00:09:18.703 ================ 00:09:18.703 Number of Power States: 1 00:09:18.703 Current Power State: Power State #0 00:09:18.703 Power State #0: 00:09:18.703 Max Power: 25.00 W 00:09:18.703 Non-Operational State: Operational 00:09:18.703 Entry Latency: 16 microseconds 00:09:18.703 Exit Latency: 4 microseconds 00:09:18.703 Relative Read Throughput: 0 00:09:18.703 Relative Read Latency: 0 00:09:18.703 Relative Write Throughput: 0 00:09:18.703 Relative Write Latency: 0 00:09:18.703 Idle Power: Not Reported 00:09:18.703 Active Power: Not Reported 00:09:18.703 Non-Operational Permissive Mode: Not Supported 00:09:18.703 00:09:18.703 Health Information 00:09:18.703 ================== 00:09:18.703 Critical Warnings: 00:09:18.703 Available Spare Space: OK 00:09:18.703 Temperature: OK 00:09:18.703 Device Reliability: OK 00:09:18.703 Read Only: No 00:09:18.703 Volatile Memory Backup: OK 00:09:18.703 Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.703 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:18.703 Available Spare: 0% 00:09:18.703 Available Spare Threshold: 0% 00:09:18.703 Life Percentage Used: 0% 00:09:18.703 Data Units Read: 694 00:09:18.703 Data Units Written: 622 00:09:18.703 Host Read Commands: 39277 00:09:18.703 Host Write Commands: 39063 00:09:18.703 Controller Busy Time: 0 minutes 00:09:18.703 Power Cycles: 0 00:09:18.703 Power On Hours: 0 hours 00:09:18.703 Unsafe Shutdowns: 0 
00:09:18.703 Unrecoverable Media Errors: 0 00:09:18.703 Lifetime Error Log Entries: 0 00:09:18.703 Warning Temperature Time: 0 minutes 00:09:18.703 Critical Temperature Time: 0 minutes 00:09:18.703 00:09:18.703 Number of Queues 00:09:18.703 ================ 00:09:18.703 Number of I/O Submission Queues: 64 00:09:18.703 Number of I/O Completion Queues: 64 00:09:18.703 00:09:18.703 ZNS Specific Controller Data 00:09:18.703 ============================ 00:09:18.703 Zone Append Size Limit: 0 00:09:18.703 00:09:18.703 00:09:18.703 Active Namespaces 00:09:18.703 ================= 00:09:18.703 Namespace ID:1 00:09:18.703 Error Recovery Timeout: Unlimited 00:09:18.703 Command Set Identifier: NVM (00h) 00:09:18.703 Deallocate: Supported 00:09:18.703 Deallocated/Unwritten Error: Supported 00:09:18.703 Deallocated Read Value: All 0x00 00:09:18.703 Deallocate in Write Zeroes: Not Supported 00:09:18.703 Deallocated Guard Field: 0xFFFF 00:09:18.703 Flush: Supported 00:09:18.703 Reservation: Not Supported 00:09:18.703 Metadata Transferred as: Separate Metadata Buffer 00:09:18.703 Namespace Sharing Capabilities: Private 00:09:18.703 Size (in LBAs): 1548666 (5GiB) 00:09:18.703 Capacity (in LBAs): 1548666 (5GiB) 00:09:18.703 Utilization (in LBAs): 1548666 (5GiB) 00:09:18.703 Thin Provisioning: Not Supported 00:09:18.703 Per-NS Atomic Units: No 00:09:18.703 Maximum Single Source Range Length: 128 00:09:18.703 Maximum Copy Length: 128 00:09:18.703 Maximum Source Range Count: 128 00:09:18.704 NGUID/EUI64 Never Reused: No 00:09:18.704 Namespace Write Protected: No 00:09:18.704 Number of LBA Formats: 8 00:09:18.704 Current LBA Format: LBA Format #07 00:09:18.704 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:18.704 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:18.704 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:18.704 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:18.704 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:18.704 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:18.704 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:18.704 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:18.704 00:09:18.704 NVM Specific Namespace Data 00:09:18.704 =========================== 00:09:18.704 Logical Block Storage Tag Mask: 0 00:09:18.704 Protection Information Capabilities: 00:09:18.704 16b Guard Protection Information Storage Tag Support: No 00:09:18.704 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:18.704 Storage Tag Check Read Support: No 00:09:18.704 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.704 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.704 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.704 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.704 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.704 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.704 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.704 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.704 ===================================================== 00:09:18.704 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:18.704 
===================================================== 00:09:18.704 Controller Capabilities/Features 00:09:18.704 ================================ 00:09:18.704 Vendor ID: 1b36 00:09:18.704 Subsystem Vendor ID: 1af4 00:09:18.704 Serial Number: 12341 00:09:18.704 Model Number: QEMU NVMe Ctrl 00:09:18.704 Firmware Version: 8.0.0 00:09:18.704 Recommended Arb Burst: 6 00:09:18.704 IEEE OUI Identifier: 00 54 52 00:09:18.704 Multi-path I/O 00:09:18.704 May have multiple subsystem ports: No 00:09:18.704 May have multiple controllers: No 00:09:18.704 Associated with SR-IOV VF: No 00:09:18.704 Max Data Transfer Size: 524288 00:09:18.704 Max Number of Namespaces: 256 00:09:18.704 Max Number of I/O Queues: 64 00:09:18.704 NVMe Specification Version (VS): 1.4 00:09:18.704 NVMe Specification Version (Identify): 1.4 00:09:18.704 Maximum Queue Entries: 2048 00:09:18.704 Contiguous Queues Required: Yes 00:09:18.704 Arbitration Mechanisms Supported 00:09:18.704 Weighted Round Robin: Not Supported 00:09:18.704 Vendor Specific: Not Supported 00:09:18.704 Reset Timeout: 7500 ms 00:09:18.704 Doorbell Stride: 4 bytes 00:09:18.704 NVM Subsystem Reset: Not Supported 00:09:18.704 Command Sets Supported 00:09:18.704 NVM Command Set: Supported 00:09:18.704 Boot Partition: Not Supported 00:09:18.704 Memory Page Size Minimum: 4096 bytes 00:09:18.704 Memory Page Size Maximum: 65536 bytes 00:09:18.704 Persistent Memory Region: Not Supported 00:09:18.704 Optional Asynchronous Events Supported 00:09:18.704 Namespace Attribute Notices: Supported 00:09:18.704 Firmware Activation Notices: Not Supported 00:09:18.704 ANA Change Notices: Not Supported 00:09:18.704 PLE Aggregate Log Change Notices: Not Supported 00:09:18.704 LBA Status Info Alert Notices: Not Supported 00:09:18.704 EGE Aggregate Log Change Notices: Not Supported 00:09:18.704 Normal NVM Subsystem Shutdown event: Not Supported 00:09:18.704 Zone Descriptor Change Notices: Not Supported 00:09:18.704 Discovery Log Change Notices: Not Supported 00:09:18.704 Controller Attributes 00:09:18.704 128-bit Host Identifier: Not Supported 00:09:18.704 Non-Operational Permissive Mode: Not Supported 00:09:18.704 NVM Sets: Not Supported 00:09:18.704 Read Recovery Levels: Not Supported 00:09:18.704 Endurance Groups: Not Supported 00:09:18.704 Predictable Latency Mode: Not Supported 00:09:18.704 Traffic Based Keep ALive: Not Supported 00:09:18.704 Namespace Granularity: Not Supported 00:09:18.704 SQ Associations: Not Supported 00:09:18.704 UUID List: Not Supported 00:09:18.704 Multi-Domain Subsystem: Not Supported 00:09:18.704 Fixed Capacity Management: Not Supported 00:09:18.704 Variable Capacity Management: Not Supported 00:09:18.704 Delete Endurance Group: Not Supported 00:09:18.704 Delete NVM Set: Not Supported 00:09:18.704 Extended LBA Formats Supported: Supported 00:09:18.704 Flexible Data Placement Supported: Not Supported 00:09:18.704 00:09:18.704 Controller Memory Buffer Support 00:09:18.704 ================================ 00:09:18.704 Supported: No 00:09:18.704 00:09:18.704 Persistent Memory Region Support 00:09:18.704 ================================ 00:09:18.704 Supported: No 00:09:18.704 00:09:18.704 Admin Command Set Attributes 00:09:18.704 ============================ 00:09:18.704 Security Send/Receive: Not Supported 00:09:18.704 Format NVM: Supported 00:09:18.704 Firmware Activate/Download: Not Supported 00:09:18.704 Namespace Management: Supported 00:09:18.704 Device Self-Test: Not Supported 00:09:18.704 Directives: Supported 00:09:18.704 NVMe-MI: Not Supported 
00:09:18.704 Virtualization Management: Not Supported 00:09:18.704 Doorbell Buffer Config: Supported 00:09:18.704 Get LBA Status Capability: Not Supported 00:09:18.704 Command & Feature Lockdown Capability: Not Supported 00:09:18.704 Abort Command Limit: 4 00:09:18.704 Async Event Request Limit: 4 00:09:18.704 Number of Firmware Slots: N/A 00:09:18.704 Firmware Slot 1 Read-Only: N/A 00:09:18.704 Firmware Activation Without Reset: N/A 00:09:18.704 Multiple Update Detection Support: N/A 00:09:18.704 Firmware Update Granularity: No Information Provided 00:09:18.704 Per-Namespace SMART Log: Yes 00:09:18.704 Asymmetric Namespace Access Log Page: Not Supported 00:09:18.704 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:18.704 Command Effects Log Page: Supported 00:09:18.704 Get Log Page Extended Data: Supported 00:09:18.704 Telemetry Log Pages: Not Supported 00:09:18.704 Persistent Event Log Pages: Not Supported 00:09:18.704 Supported Log Pages Log Page: May Support 00:09:18.704 Commands Supported & Effects Log Page: Not Supported 00:09:18.704 Feature Identifiers & Effects Log Page:May Support 00:09:18.704 NVMe-MI Commands & Effects Log Page: May Support 00:09:18.704 Data Area 4 for Telemetry Log: Not Supported 00:09:18.704 Error Log Page Entries Supported: 1 00:09:18.704 Keep Alive: Not Supported 00:09:18.704 00:09:18.704 NVM Command Set Attributes 00:09:18.704 ========================== 00:09:18.704 Submission Queue Entry Size 00:09:18.704 Max: 64 00:09:18.704 Min: 64 00:09:18.704 Completion Queue Entry Size 00:09:18.704 Max: 16 00:09:18.704 Min: 16 00:09:18.704 Number of Namespaces: 256 00:09:18.704 Compare Command: Supported 00:09:18.704 Write Uncorrectable Command: Not Supported 00:09:18.704 Dataset Management Command: Supported 00:09:18.704 Write Zeroes Command: Supported 00:09:18.704 Set Features Save Field: Supported 00:09:18.704 Reservations: Not Supported 00:09:18.704 Timestamp: Supported 00:09:18.704 Copy: Supported 00:09:18.704 Volatile Write Cache: Present 00:09:18.704 Atomic Write Unit (Normal): 1 00:09:18.704 Atomic Write Unit (PFail): 1 00:09:18.704 Atomic Compare & Write Unit: 1 00:09:18.704 Fused Compare & Write: Not Supported 00:09:18.704 Scatter-Gather List 00:09:18.704 SGL Command Set: Supported 00:09:18.704 SGL Keyed: Not Supported 00:09:18.704 SGL Bit Bucket Descriptor: Not Supported 00:09:18.704 SGL Metadata Pointer: Not Supported 00:09:18.704 Oversized SGL: Not Supported 00:09:18.704 SGL Metadata Address: Not Supported 00:09:18.704 SGL Offset: Not Supported 00:09:18.704 Transport SGL Data Block: Not Supported 00:09:18.704 Replay Protected Memory Block: Not Supported 00:09:18.704 00:09:18.704 Firmware Slot Information 00:09:18.704 ========================= 00:09:18.704 Active slot: 1 00:09:18.704 Slot 1 Firmware Revision: 1.0 00:09:18.704 00:09:18.704 00:09:18.704 Commands Supported and Effects 00:09:18.704 ============================== 00:09:18.704 Admin Commands 00:09:18.704 -------------- 00:09:18.704 Delete I/O Submission Queue (00h): Supported 00:09:18.704 Create I/O Submission Queue (01h): Supported 00:09:18.704 Get Log Page (02h): Supported 00:09:18.704 Delete I/O Completion Queue (04h): Supported 00:09:18.704 Create I/O Completion Queue (05h): Supported 00:09:18.704 Identify (06h): Supported 00:09:18.704 Abort (08h): Supported 00:09:18.704 Set Features (09h): Supported 00:09:18.704 Get Features (0Ah): Supported 00:09:18.704 [2024-11-27 11:04:47.451420] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 75091 terminated unexpected
00:09:18.704 [2024-11-27 11:04:47.453193] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 75091 terminated unexpected 00:09:18.704 [2024-11-27 11:04:47.454031] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 75091 terminated unexpected 00:09:18.704 Asynchronous Event Request (0Ch): Supported 00:09:18.705 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:18.705 Directive Send (19h): Supported 00:09:18.705 Directive Receive (1Ah): Supported 00:09:18.705 Virtualization Management (1Ch): Supported 00:09:18.705 Doorbell Buffer Config (7Ch): Supported 00:09:18.705 Format NVM (80h): Supported LBA-Change 00:09:18.705 I/O Commands 00:09:18.705 ------------ 00:09:18.705 Flush (00h): Supported LBA-Change 00:09:18.705 Write (01h): Supported LBA-Change 00:09:18.705 Read (02h): Supported 00:09:18.705 Compare (05h): Supported 00:09:18.705 Write Zeroes (08h): Supported LBA-Change 00:09:18.705 Dataset Management (09h): Supported LBA-Change 00:09:18.705 Unknown (0Ch): Supported 00:09:18.705 Unknown (12h): Supported 00:09:18.705 Copy (19h): Supported LBA-Change 00:09:18.705 Unknown (1Dh): Supported LBA-Change 00:09:18.705 00:09:18.705 Error Log 00:09:18.705 ========= 00:09:18.705 00:09:18.705 Arbitration 00:09:18.705 =========== 00:09:18.705 Arbitration Burst: no limit 00:09:18.705 00:09:18.705 Power Management 00:09:18.705 ================ 00:09:18.705 Number of Power States: 1 00:09:18.705 Current Power State: Power State #0 00:09:18.705 Power State #0: 00:09:18.705 Max Power: 25.00 W 00:09:18.705 Non-Operational State: Operational 00:09:18.705 Entry Latency: 16 microseconds 00:09:18.705 Exit Latency: 4 microseconds 00:09:18.705 Relative Read Throughput: 0 00:09:18.705 Relative Read Latency: 0 00:09:18.705 Relative Write Throughput: 0 00:09:18.705 Relative Write Latency: 0 00:09:18.705 Idle Power: Not Reported 00:09:18.705 Active Power: Not Reported 00:09:18.705 Non-Operational Permissive Mode: Not Supported 00:09:18.705 00:09:18.705 Health Information 00:09:18.705 ================== 00:09:18.705 Critical Warnings: 00:09:18.705 Available Spare Space: OK 00:09:18.705 Temperature: OK 00:09:18.705 Device Reliability: OK 00:09:18.705 Read Only: No 00:09:18.705 Volatile Memory Backup: OK 00:09:18.705 Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.705 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:18.705 Available Spare: 0% 00:09:18.705 Available Spare Threshold: 0% 00:09:18.705 Life Percentage Used: 0% 00:09:18.705 Data Units Read: 1053 00:09:18.705 Data Units Written: 919 00:09:18.705 Host Read Commands: 58141 00:09:18.705 Host Write Commands: 56933 00:09:18.705 Controller Busy Time: 0 minutes 00:09:18.705 Power Cycles: 0 00:09:18.705 Power On Hours: 0 hours 00:09:18.705 Unsafe Shutdowns: 0 00:09:18.705 Unrecoverable Media Errors: 0 00:09:18.705 Lifetime Error Log Entries: 0 00:09:18.705 Warning Temperature Time: 0 minutes 00:09:18.705 Critical Temperature Time: 0 minutes 00:09:18.705 00:09:18.705 Number of Queues 00:09:18.705 ================ 00:09:18.705 Number of I/O Submission Queues: 64 00:09:18.705 Number of I/O Completion Queues: 64 00:09:18.705 00:09:18.705 ZNS Specific Controller Data 00:09:18.705 ============================ 00:09:18.705 Zone Append Size Limit: 0 00:09:18.705 00:09:18.705 00:09:18.705 Active Namespaces 00:09:18.705 ================= 00:09:18.705 Namespace ID:1 00:09:18.705 Error Recovery Timeout: Unlimited 00:09:18.705 Command Set Identifier: NVM (00h) 00:09:18.705 Deallocate:
Supported 00:09:18.705 Deallocated/Unwritten Error: Supported 00:09:18.705 Deallocated Read Value: All 0x00 00:09:18.705 Deallocate in Write Zeroes: Not Supported 00:09:18.705 Deallocated Guard Field: 0xFFFF 00:09:18.705 Flush: Supported 00:09:18.705 Reservation: Not Supported 00:09:18.705 Namespace Sharing Capabilities: Private 00:09:18.705 Size (in LBAs): 1310720 (5GiB) 00:09:18.705 Capacity (in LBAs): 1310720 (5GiB) 00:09:18.705 Utilization (in LBAs): 1310720 (5GiB) 00:09:18.705 Thin Provisioning: Not Supported 00:09:18.705 Per-NS Atomic Units: No 00:09:18.705 Maximum Single Source Range Length: 128 00:09:18.705 Maximum Copy Length: 128 00:09:18.705 Maximum Source Range Count: 128 00:09:18.705 NGUID/EUI64 Never Reused: No 00:09:18.705 Namespace Write Protected: No 00:09:18.705 Number of LBA Formats: 8 00:09:18.705 Current LBA Format: LBA Format #04 00:09:18.705 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:18.705 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:18.705 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:18.705 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:18.705 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:18.705 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:18.705 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:18.705 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:18.705 00:09:18.705 NVM Specific Namespace Data 00:09:18.705 =========================== 00:09:18.705 Logical Block Storage Tag Mask: 0 00:09:18.705 Protection Information Capabilities: 00:09:18.705 16b Guard Protection Information Storage Tag Support: No 00:09:18.705 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:18.705 Storage Tag Check Read Support: No 00:09:18.705 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.705 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.705 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.705 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.705 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.705 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.705 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.705 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.705 ===================================================== 00:09:18.705 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:18.705 ===================================================== 00:09:18.705 Controller Capabilities/Features 00:09:18.705 ================================ 00:09:18.705 Vendor ID: 1b36 00:09:18.705 Subsystem Vendor ID: 1af4 00:09:18.705 Serial Number: 12342 00:09:18.705 Model Number: QEMU NVMe Ctrl 00:09:18.705 Firmware Version: 8.0.0 00:09:18.705 Recommended Arb Burst: 6 00:09:18.705 IEEE OUI Identifier: 00 54 52 00:09:18.705 Multi-path I/O 00:09:18.705 May have multiple subsystem ports: No 00:09:18.705 May have multiple controllers: No 00:09:18.705 Associated with SR-IOV VF: No 00:09:18.705 Max Data Transfer Size: 524288 00:09:18.705 Max Number of Namespaces: 256 00:09:18.705 Max Number of I/O Queues: 64 00:09:18.705 NVMe Specification Version (VS): 1.4 00:09:18.705 NVMe Specification Version (Identify): 1.4 00:09:18.705 
Maximum Queue Entries: 2048 00:09:18.705 Contiguous Queues Required: Yes 00:09:18.705 Arbitration Mechanisms Supported 00:09:18.705 Weighted Round Robin: Not Supported 00:09:18.705 Vendor Specific: Not Supported 00:09:18.705 Reset Timeout: 7500 ms 00:09:18.705 Doorbell Stride: 4 bytes 00:09:18.705 NVM Subsystem Reset: Not Supported 00:09:18.705 Command Sets Supported 00:09:18.705 NVM Command Set: Supported 00:09:18.705 Boot Partition: Not Supported 00:09:18.705 Memory Page Size Minimum: 4096 bytes 00:09:18.705 Memory Page Size Maximum: 65536 bytes 00:09:18.705 Persistent Memory Region: Not Supported 00:09:18.705 Optional Asynchronous Events Supported 00:09:18.705 Namespace Attribute Notices: Supported 00:09:18.705 Firmware Activation Notices: Not Supported 00:09:18.705 ANA Change Notices: Not Supported 00:09:18.705 PLE Aggregate Log Change Notices: Not Supported 00:09:18.705 LBA Status Info Alert Notices: Not Supported 00:09:18.705 EGE Aggregate Log Change Notices: Not Supported 00:09:18.705 Normal NVM Subsystem Shutdown event: Not Supported 00:09:18.705 Zone Descriptor Change Notices: Not Supported 00:09:18.706 Discovery Log Change Notices: Not Supported 00:09:18.706 Controller Attributes 00:09:18.706 128-bit Host Identifier: Not Supported 00:09:18.706 Non-Operational Permissive Mode: Not Supported 00:09:18.706 NVM Sets: Not Supported 00:09:18.706 Read Recovery Levels: Not Supported 00:09:18.706 Endurance Groups: Not Supported 00:09:18.706 Predictable Latency Mode: Not Supported 00:09:18.706 Traffic Based Keep ALive: Not Supported 00:09:18.706 Namespace Granularity: Not Supported 00:09:18.706 SQ Associations: Not Supported 00:09:18.706 UUID List: Not Supported 00:09:18.706 Multi-Domain Subsystem: Not Supported 00:09:18.706 Fixed Capacity Management: Not Supported 00:09:18.706 Variable Capacity Management: Not Supported 00:09:18.706 Delete Endurance Group: Not Supported 00:09:18.706 Delete NVM Set: Not Supported 00:09:18.706 Extended LBA Formats Supported: Supported 00:09:18.706 Flexible Data Placement Supported: Not Supported 00:09:18.706 00:09:18.706 Controller Memory Buffer Support 00:09:18.706 ================================ 00:09:18.706 Supported: No 00:09:18.706 00:09:18.706 Persistent Memory Region Support 00:09:18.706 ================================ 00:09:18.706 Supported: No 00:09:18.706 00:09:18.706 Admin Command Set Attributes 00:09:18.706 ============================ 00:09:18.706 Security Send/Receive: Not Supported 00:09:18.706 Format NVM: Supported 00:09:18.706 Firmware Activate/Download: Not Supported 00:09:18.706 Namespace Management: Supported 00:09:18.706 Device Self-Test: Not Supported 00:09:18.706 Directives: Supported 00:09:18.706 NVMe-MI: Not Supported 00:09:18.706 Virtualization Management: Not Supported 00:09:18.706 Doorbell Buffer Config: Supported 00:09:18.706 Get LBA Status Capability: Not Supported 00:09:18.706 Command & Feature Lockdown Capability: Not Supported 00:09:18.706 Abort Command Limit: 4 00:09:18.706 Async Event Request Limit: 4 00:09:18.706 Number of Firmware Slots: N/A 00:09:18.706 Firmware Slot 1 Read-Only: N/A 00:09:18.706 Firmware Activation Without Reset: N/A 00:09:18.706 Multiple Update Detection Support: N/A 00:09:18.706 Firmware Update Granularity: No Information Provided 00:09:18.706 Per-Namespace SMART Log: Yes 00:09:18.706 Asymmetric Namespace Access Log Page: Not Supported 00:09:18.706 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:18.706 Command Effects Log Page: Supported 00:09:18.706 Get Log Page Extended Data: Supported 
00:09:18.706 Telemetry Log Pages: Not Supported 00:09:18.706 Persistent Event Log Pages: Not Supported 00:09:18.706 Supported Log Pages Log Page: May Support 00:09:18.706 Commands Supported & Effects Log Page: Not Supported 00:09:18.706 Feature Identifiers & Effects Log Page:May Support 00:09:18.706 NVMe-MI Commands & Effects Log Page: May Support 00:09:18.706 Data Area 4 for Telemetry Log: Not Supported 00:09:18.706 Error Log Page Entries Supported: 1 00:09:18.706 Keep Alive: Not Supported 00:09:18.706 00:09:18.706 NVM Command Set Attributes 00:09:18.706 ========================== 00:09:18.706 Submission Queue Entry Size 00:09:18.706 Max: 64 00:09:18.706 Min: 64 00:09:18.706 Completion Queue Entry Size 00:09:18.706 Max: 16 00:09:18.706 Min: 16 00:09:18.706 Number of Namespaces: 256 00:09:18.706 Compare Command: Supported 00:09:18.706 Write Uncorrectable Command: Not Supported 00:09:18.706 Dataset Management Command: Supported 00:09:18.706 Write Zeroes Command: Supported 00:09:18.706 Set Features Save Field: Supported 00:09:18.706 Reservations: Not Supported 00:09:18.706 Timestamp: Supported 00:09:18.706 Copy: Supported 00:09:18.706 Volatile Write Cache: Present 00:09:18.706 Atomic Write Unit (Normal): 1 00:09:18.706 Atomic Write Unit (PFail): 1 00:09:18.706 Atomic Compare & Write Unit: 1 00:09:18.706 Fused Compare & Write: Not Supported 00:09:18.706 Scatter-Gather List 00:09:18.706 SGL Command Set: Supported 00:09:18.706 SGL Keyed: Not Supported 00:09:18.706 SGL Bit Bucket Descriptor: Not Supported 00:09:18.706 SGL Metadata Pointer: Not Supported 00:09:18.706 Oversized SGL: Not Supported 00:09:18.706 SGL Metadata Address: Not Supported 00:09:18.706 SGL Offset: Not Supported 00:09:18.706 Transport SGL Data Block: Not Supported 00:09:18.706 Replay Protected Memory Block: Not Supported 00:09:18.706 00:09:18.706 Firmware Slot Information 00:09:18.706 ========================= 00:09:18.706 Active slot: 1 00:09:18.706 Slot 1 Firmware Revision: 1.0 00:09:18.706 00:09:18.706 00:09:18.706 Commands Supported and Effects 00:09:18.706 ============================== 00:09:18.706 Admin Commands 00:09:18.706 -------------- 00:09:18.706 Delete I/O Submission Queue (00h): Supported 00:09:18.706 Create I/O Submission Queue (01h): Supported 00:09:18.706 Get Log Page (02h): Supported 00:09:18.706 Delete I/O Completion Queue (04h): Supported 00:09:18.706 Create I/O Completion Queue (05h): Supported 00:09:18.706 Identify (06h): Supported 00:09:18.706 Abort (08h): Supported 00:09:18.706 Set Features (09h): Supported 00:09:18.706 Get Features (0Ah): Supported 00:09:18.706 Asynchronous Event Request (0Ch): Supported 00:09:18.706 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:18.706 Directive Send (19h): Supported 00:09:18.706 Directive Receive (1Ah): Supported 00:09:18.706 Virtualization Management (1Ch): Supported 00:09:18.706 Doorbell Buffer Config (7Ch): Supported 00:09:18.706 [2024-11-27 11:04:47.455488] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 75091 terminated unexpected 00:09:18.706 Format NVM (80h): Supported LBA-Change 00:09:18.706 I/O Commands 00:09:18.706 ------------ 00:09:18.706 Flush (00h): Supported LBA-Change 00:09:18.706 Write (01h): Supported LBA-Change 00:09:18.706 Read (02h): Supported 00:09:18.706 Compare (05h): Supported 00:09:18.706 Write Zeroes (08h): Supported LBA-Change 00:09:18.706 Dataset Management (09h): Supported LBA-Change 00:09:18.706 Unknown (0Ch): Supported 00:09:18.706 Unknown (12h): Supported 00:09:18.706 Copy (19h):
Supported LBA-Change 00:09:18.706 Unknown (1Dh): Supported LBA-Change 00:09:18.706 00:09:18.706 Error Log 00:09:18.706 ========= 00:09:18.706 00:09:18.706 Arbitration 00:09:18.706 =========== 00:09:18.706 Arbitration Burst: no limit 00:09:18.706 00:09:18.706 Power Management 00:09:18.706 ================ 00:09:18.706 Number of Power States: 1 00:09:18.706 Current Power State: Power State #0 00:09:18.706 Power State #0: 00:09:18.706 Max Power: 25.00 W 00:09:18.706 Non-Operational State: Operational 00:09:18.706 Entry Latency: 16 microseconds 00:09:18.706 Exit Latency: 4 microseconds 00:09:18.706 Relative Read Throughput: 0 00:09:18.706 Relative Read Latency: 0 00:09:18.706 Relative Write Throughput: 0 00:09:18.706 Relative Write Latency: 0 00:09:18.706 Idle Power: Not Reported 00:09:18.706 Active Power: Not Reported 00:09:18.706 Non-Operational Permissive Mode: Not Supported 00:09:18.706 00:09:18.706 Health Information 00:09:18.706 ================== 00:09:18.706 Critical Warnings: 00:09:18.706 Available Spare Space: OK 00:09:18.706 Temperature: OK 00:09:18.706 Device Reliability: OK 00:09:18.706 Read Only: No 00:09:18.706 Volatile Memory Backup: OK 00:09:18.706 Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.706 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:18.706 Available Spare: 0% 00:09:18.706 Available Spare Threshold: 0% 00:09:18.706 Life Percentage Used: 0% 00:09:18.706 Data Units Read: 2210 00:09:18.706 Data Units Written: 1997 00:09:18.706 Host Read Commands: 119865 00:09:18.706 Host Write Commands: 118134 00:09:18.706 Controller Busy Time: 0 minutes 00:09:18.706 Power Cycles: 0 00:09:18.706 Power On Hours: 0 hours 00:09:18.706 Unsafe Shutdowns: 0 00:09:18.706 Unrecoverable Media Errors: 0 00:09:18.706 Lifetime Error Log Entries: 0 00:09:18.706 Warning Temperature Time: 0 minutes 00:09:18.706 Critical Temperature Time: 0 minutes 00:09:18.706 00:09:18.706 Number of Queues 00:09:18.706 ================ 00:09:18.706 Number of I/O Submission Queues: 64 00:09:18.706 Number of I/O Completion Queues: 64 00:09:18.706 00:09:18.706 ZNS Specific Controller Data 00:09:18.706 ============================ 00:09:18.706 Zone Append Size Limit: 0 00:09:18.706 00:09:18.706 00:09:18.706 Active Namespaces 00:09:18.706 ================= 00:09:18.706 Namespace ID:1 00:09:18.706 Error Recovery Timeout: Unlimited 00:09:18.706 Command Set Identifier: NVM (00h) 00:09:18.706 Deallocate: Supported 00:09:18.706 Deallocated/Unwritten Error: Supported 00:09:18.706 Deallocated Read Value: All 0x00 00:09:18.706 Deallocate in Write Zeroes: Not Supported 00:09:18.706 Deallocated Guard Field: 0xFFFF 00:09:18.706 Flush: Supported 00:09:18.706 Reservation: Not Supported 00:09:18.706 Namespace Sharing Capabilities: Private 00:09:18.706 Size (in LBAs): 1048576 (4GiB) 00:09:18.707 Capacity (in LBAs): 1048576 (4GiB) 00:09:18.707 Utilization (in LBAs): 1048576 (4GiB) 00:09:18.707 Thin Provisioning: Not Supported 00:09:18.707 Per-NS Atomic Units: No 00:09:18.707 Maximum Single Source Range Length: 128 00:09:18.707 Maximum Copy Length: 128 00:09:18.707 Maximum Source Range Count: 128 00:09:18.707 NGUID/EUI64 Never Reused: No 00:09:18.707 Namespace Write Protected: No 00:09:18.707 Number of LBA Formats: 8 00:09:18.707 Current LBA Format: LBA Format #04 00:09:18.707 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:18.707 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:18.707 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:18.707 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:18.707 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:09:18.707 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:18.707 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:18.707 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:18.707 00:09:18.707 NVM Specific Namespace Data 00:09:18.707 =========================== 00:09:18.707 Logical Block Storage Tag Mask: 0 00:09:18.707 Protection Information Capabilities: 00:09:18.707 16b Guard Protection Information Storage Tag Support: No 00:09:18.707 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:18.707 Storage Tag Check Read Support: No 00:09:18.707 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Namespace ID:2 00:09:18.707 Error Recovery Timeout: Unlimited 00:09:18.707 Command Set Identifier: NVM (00h) 00:09:18.707 Deallocate: Supported 00:09:18.707 Deallocated/Unwritten Error: Supported 00:09:18.707 Deallocated Read Value: All 0x00 00:09:18.707 Deallocate in Write Zeroes: Not Supported 00:09:18.707 Deallocated Guard Field: 0xFFFF 00:09:18.707 Flush: Supported 00:09:18.707 Reservation: Not Supported 00:09:18.707 Namespace Sharing Capabilities: Private 00:09:18.707 Size (in LBAs): 1048576 (4GiB) 00:09:18.707 Capacity (in LBAs): 1048576 (4GiB) 00:09:18.707 Utilization (in LBAs): 1048576 (4GiB) 00:09:18.707 Thin Provisioning: Not Supported 00:09:18.707 Per-NS Atomic Units: No 00:09:18.707 Maximum Single Source Range Length: 128 00:09:18.707 Maximum Copy Length: 128 00:09:18.707 Maximum Source Range Count: 128 00:09:18.707 NGUID/EUI64 Never Reused: No 00:09:18.707 Namespace Write Protected: No 00:09:18.707 Number of LBA Formats: 8 00:09:18.707 Current LBA Format: LBA Format #04 00:09:18.707 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:18.707 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:18.707 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:18.707 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:18.707 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:18.707 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:18.707 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:18.707 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:18.707 00:09:18.707 NVM Specific Namespace Data 00:09:18.707 =========================== 00:09:18.707 Logical Block Storage Tag Mask: 0 00:09:18.707 Protection Information Capabilities: 00:09:18.707 16b Guard Protection Information Storage Tag Support: No 00:09:18.707 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:18.707 Storage Tag Check Read Support: No 00:09:18.707 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Namespace ID:3 00:09:18.707 Error Recovery Timeout: Unlimited 00:09:18.707 Command Set Identifier: NVM (00h) 00:09:18.707 Deallocate: Supported 00:09:18.707 Deallocated/Unwritten Error: Supported 00:09:18.707 Deallocated Read Value: All 0x00 00:09:18.707 Deallocate in Write Zeroes: Not Supported 00:09:18.707 Deallocated Guard Field: 0xFFFF 00:09:18.707 Flush: Supported 00:09:18.707 Reservation: Not Supported 00:09:18.707 Namespace Sharing Capabilities: Private 00:09:18.707 Size (in LBAs): 1048576 (4GiB) 00:09:18.707 Capacity (in LBAs): 1048576 (4GiB) 00:09:18.707 Utilization (in LBAs): 1048576 (4GiB) 00:09:18.707 Thin Provisioning: Not Supported 00:09:18.707 Per-NS Atomic Units: No 00:09:18.707 Maximum Single Source Range Length: 128 00:09:18.707 Maximum Copy Length: 128 00:09:18.707 Maximum Source Range Count: 128 00:09:18.707 NGUID/EUI64 Never Reused: No 00:09:18.707 Namespace Write Protected: No 00:09:18.707 Number of LBA Formats: 8 00:09:18.707 Current LBA Format: LBA Format #04 00:09:18.707 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:18.707 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:18.707 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:18.707 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:18.707 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:18.707 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:18.707 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:18.707 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:18.707 00:09:18.707 NVM Specific Namespace Data 00:09:18.707 =========================== 00:09:18.707 Logical Block Storage Tag Mask: 0 00:09:18.707 Protection Information Capabilities: 00:09:18.707 16b Guard Protection Information Storage Tag Support: No 00:09:18.707 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:18.707 Storage Tag Check Read Support: No 00:09:18.707 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.707 11:04:47 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:18.707 11:04:47 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:18.968 ===================================================== 00:09:18.968 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:18.968 ===================================================== 00:09:18.968 Controller Capabilities/Features 00:09:18.968 ================================ 00:09:18.968 Vendor ID: 1b36 00:09:18.969 Subsystem Vendor ID: 1af4 00:09:18.969 Serial Number: 12340 00:09:18.969 Model Number: QEMU NVMe Ctrl 00:09:18.969 Firmware Version: 8.0.0 00:09:18.969 Recommended Arb Burst: 6 00:09:18.969 IEEE OUI Identifier: 00 54 52 00:09:18.969 Multi-path I/O 00:09:18.969 May have multiple subsystem ports: No 00:09:18.969 May have multiple controllers: No 00:09:18.969 Associated with SR-IOV VF: No 00:09:18.969 Max Data Transfer Size: 524288 00:09:18.969 Max Number of Namespaces: 256 00:09:18.969 Max Number of I/O Queues: 64 00:09:18.969 NVMe Specification Version (VS): 1.4 00:09:18.969 NVMe Specification Version (Identify): 1.4 00:09:18.969 Maximum Queue Entries: 2048 00:09:18.969 Contiguous Queues Required: Yes 00:09:18.969 Arbitration Mechanisms Supported 00:09:18.969 Weighted Round Robin: Not Supported 00:09:18.969 Vendor Specific: Not Supported 00:09:18.969 Reset Timeout: 7500 ms 00:09:18.969 Doorbell Stride: 4 bytes 00:09:18.969 NVM Subsystem Reset: Not Supported 00:09:18.969 Command Sets Supported 00:09:18.969 NVM Command Set: Supported 00:09:18.969 Boot Partition: Not Supported 00:09:18.969 Memory Page Size Minimum: 4096 bytes 00:09:18.969 Memory Page Size Maximum: 65536 bytes 00:09:18.969 Persistent Memory Region: Not Supported 00:09:18.969 Optional Asynchronous Events Supported 00:09:18.969 Namespace Attribute Notices: Supported 00:09:18.969 Firmware Activation Notices: Not Supported 00:09:18.969 ANA Change Notices: Not Supported 00:09:18.969 PLE Aggregate Log Change Notices: Not Supported 00:09:18.969 LBA Status Info Alert Notices: Not Supported 00:09:18.969 EGE Aggregate Log Change Notices: Not Supported 00:09:18.969 Normal NVM Subsystem Shutdown event: Not Supported 00:09:18.969 Zone Descriptor Change Notices: Not Supported 00:09:18.969 Discovery Log Change Notices: Not Supported 00:09:18.969 Controller Attributes 00:09:18.969 128-bit Host Identifier: Not Supported 00:09:18.969 Non-Operational Permissive Mode: Not Supported 00:09:18.969 NVM Sets: Not Supported 00:09:18.969 Read Recovery Levels: Not Supported 00:09:18.969 Endurance Groups: Not Supported 00:09:18.969 Predictable Latency Mode: Not Supported 00:09:18.969 Traffic Based Keep ALive: Not Supported 00:09:18.969 Namespace Granularity: Not Supported 00:09:18.969 SQ Associations: Not Supported 00:09:18.969 UUID List: Not Supported 00:09:18.969 Multi-Domain Subsystem: Not Supported 00:09:18.969 Fixed Capacity Management: Not Supported 00:09:18.969 Variable Capacity Management: Not Supported 00:09:18.969 Delete Endurance Group: Not Supported 00:09:18.969 Delete NVM Set: Not Supported 00:09:18.969 Extended LBA Formats Supported: Supported 00:09:18.969 Flexible Data Placement Supported: Not Supported 00:09:18.969 00:09:18.969 Controller Memory Buffer Support 00:09:18.969 ================================ 00:09:18.969 Supported: No 00:09:18.969 00:09:18.969 Persistent Memory Region Support 00:09:18.969 ================================ 00:09:18.969 Supported: No 00:09:18.969 00:09:18.969 Admin Command Set Attributes 00:09:18.969 ============================ 00:09:18.969 Security Send/Receive: Not Supported 00:09:18.969 
Format NVM: Supported 00:09:18.969 Firmware Activate/Download: Not Supported 00:09:18.969 Namespace Management: Supported 00:09:18.969 Device Self-Test: Not Supported 00:09:18.969 Directives: Supported 00:09:18.969 NVMe-MI: Not Supported 00:09:18.969 Virtualization Management: Not Supported 00:09:18.969 Doorbell Buffer Config: Supported 00:09:18.969 Get LBA Status Capability: Not Supported 00:09:18.969 Command & Feature Lockdown Capability: Not Supported 00:09:18.969 Abort Command Limit: 4 00:09:18.969 Async Event Request Limit: 4 00:09:18.969 Number of Firmware Slots: N/A 00:09:18.969 Firmware Slot 1 Read-Only: N/A 00:09:18.969 Firmware Activation Without Reset: N/A 00:09:18.969 Multiple Update Detection Support: N/A 00:09:18.969 Firmware Update Granularity: No Information Provided 00:09:18.969 Per-Namespace SMART Log: Yes 00:09:18.969 Asymmetric Namespace Access Log Page: Not Supported 00:09:18.969 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:18.969 Command Effects Log Page: Supported 00:09:18.969 Get Log Page Extended Data: Supported 00:09:18.969 Telemetry Log Pages: Not Supported 00:09:18.969 Persistent Event Log Pages: Not Supported 00:09:18.969 Supported Log Pages Log Page: May Support 00:09:18.969 Commands Supported & Effects Log Page: Not Supported 00:09:18.969 Feature Identifiers & Effects Log Page:May Support 00:09:18.969 NVMe-MI Commands & Effects Log Page: May Support 00:09:18.969 Data Area 4 for Telemetry Log: Not Supported 00:09:18.969 Error Log Page Entries Supported: 1 00:09:18.969 Keep Alive: Not Supported 00:09:18.969 00:09:18.969 NVM Command Set Attributes 00:09:18.969 ========================== 00:09:18.969 Submission Queue Entry Size 00:09:18.969 Max: 64 00:09:18.969 Min: 64 00:09:18.969 Completion Queue Entry Size 00:09:18.969 Max: 16 00:09:18.969 Min: 16 00:09:18.969 Number of Namespaces: 256 00:09:18.969 Compare Command: Supported 00:09:18.969 Write Uncorrectable Command: Not Supported 00:09:18.969 Dataset Management Command: Supported 00:09:18.969 Write Zeroes Command: Supported 00:09:18.969 Set Features Save Field: Supported 00:09:18.969 Reservations: Not Supported 00:09:18.969 Timestamp: Supported 00:09:18.969 Copy: Supported 00:09:18.969 Volatile Write Cache: Present 00:09:18.969 Atomic Write Unit (Normal): 1 00:09:18.969 Atomic Write Unit (PFail): 1 00:09:18.969 Atomic Compare & Write Unit: 1 00:09:18.969 Fused Compare & Write: Not Supported 00:09:18.969 Scatter-Gather List 00:09:18.969 SGL Command Set: Supported 00:09:18.969 SGL Keyed: Not Supported 00:09:18.969 SGL Bit Bucket Descriptor: Not Supported 00:09:18.969 SGL Metadata Pointer: Not Supported 00:09:18.969 Oversized SGL: Not Supported 00:09:18.969 SGL Metadata Address: Not Supported 00:09:18.969 SGL Offset: Not Supported 00:09:18.969 Transport SGL Data Block: Not Supported 00:09:18.969 Replay Protected Memory Block: Not Supported 00:09:18.969 00:09:18.969 Firmware Slot Information 00:09:18.969 ========================= 00:09:18.969 Active slot: 1 00:09:18.969 Slot 1 Firmware Revision: 1.0 00:09:18.969 00:09:18.969 00:09:18.969 Commands Supported and Effects 00:09:18.969 ============================== 00:09:18.969 Admin Commands 00:09:18.969 -------------- 00:09:18.969 Delete I/O Submission Queue (00h): Supported 00:09:18.969 Create I/O Submission Queue (01h): Supported 00:09:18.969 Get Log Page (02h): Supported 00:09:18.969 Delete I/O Completion Queue (04h): Supported 00:09:18.969 Create I/O Completion Queue (05h): Supported 00:09:18.969 Identify (06h): Supported 00:09:18.969 Abort (08h): Supported 
00:09:18.969 Set Features (09h): Supported 00:09:18.969 Get Features (0Ah): Supported 00:09:18.969 Asynchronous Event Request (0Ch): Supported 00:09:18.969 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:18.969 Directive Send (19h): Supported 00:09:18.969 Directive Receive (1Ah): Supported 00:09:18.969 Virtualization Management (1Ch): Supported 00:09:18.969 Doorbell Buffer Config (7Ch): Supported 00:09:18.969 Format NVM (80h): Supported LBA-Change 00:09:18.969 I/O Commands 00:09:18.969 ------------ 00:09:18.969 Flush (00h): Supported LBA-Change 00:09:18.969 Write (01h): Supported LBA-Change 00:09:18.969 Read (02h): Supported 00:09:18.969 Compare (05h): Supported 00:09:18.969 Write Zeroes (08h): Supported LBA-Change 00:09:18.969 Dataset Management (09h): Supported LBA-Change 00:09:18.969 Unknown (0Ch): Supported 00:09:18.969 Unknown (12h): Supported 00:09:18.969 Copy (19h): Supported LBA-Change 00:09:18.969 Unknown (1Dh): Supported LBA-Change 00:09:18.969 00:09:18.969 Error Log 00:09:18.969 ========= 00:09:18.969 00:09:18.969 Arbitration 00:09:18.969 =========== 00:09:18.969 Arbitration Burst: no limit 00:09:18.969 00:09:18.969 Power Management 00:09:18.969 ================ 00:09:18.969 Number of Power States: 1 00:09:18.969 Current Power State: Power State #0 00:09:18.969 Power State #0: 00:09:18.969 Max Power: 25.00 W 00:09:18.969 Non-Operational State: Operational 00:09:18.969 Entry Latency: 16 microseconds 00:09:18.969 Exit Latency: 4 microseconds 00:09:18.969 Relative Read Throughput: 0 00:09:18.969 Relative Read Latency: 0 00:09:18.969 Relative Write Throughput: 0 00:09:18.969 Relative Write Latency: 0 00:09:18.969 Idle Power: Not Reported 00:09:18.969 Active Power: Not Reported 00:09:18.969 Non-Operational Permissive Mode: Not Supported 00:09:18.969 00:09:18.969 Health Information 00:09:18.969 ================== 00:09:18.969 Critical Warnings: 00:09:18.969 Available Spare Space: OK 00:09:18.969 Temperature: OK 00:09:18.969 Device Reliability: OK 00:09:18.970 Read Only: No 00:09:18.970 Volatile Memory Backup: OK 00:09:18.970 Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.970 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:18.970 Available Spare: 0% 00:09:18.970 Available Spare Threshold: 0% 00:09:18.970 Life Percentage Used: 0% 00:09:18.970 Data Units Read: 694 00:09:18.970 Data Units Written: 622 00:09:18.970 Host Read Commands: 39277 00:09:18.970 Host Write Commands: 39063 00:09:18.970 Controller Busy Time: 0 minutes 00:09:18.970 Power Cycles: 0 00:09:18.970 Power On Hours: 0 hours 00:09:18.970 Unsafe Shutdowns: 0 00:09:18.970 Unrecoverable Media Errors: 0 00:09:18.970 Lifetime Error Log Entries: 0 00:09:18.970 Warning Temperature Time: 0 minutes 00:09:18.970 Critical Temperature Time: 0 minutes 00:09:18.970 00:09:18.970 Number of Queues 00:09:18.970 ================ 00:09:18.970 Number of I/O Submission Queues: 64 00:09:18.970 Number of I/O Completion Queues: 64 00:09:18.970 00:09:18.970 ZNS Specific Controller Data 00:09:18.970 ============================ 00:09:18.970 Zone Append Size Limit: 0 00:09:18.970 00:09:18.970 00:09:18.970 Active Namespaces 00:09:18.970 ================= 00:09:18.970 Namespace ID:1 00:09:18.970 Error Recovery Timeout: Unlimited 00:09:18.970 Command Set Identifier: NVM (00h) 00:09:18.970 Deallocate: Supported 00:09:18.970 Deallocated/Unwritten Error: Supported 00:09:18.970 Deallocated Read Value: All 0x00 00:09:18.970 Deallocate in Write Zeroes: Not Supported 00:09:18.970 Deallocated Guard Field: 0xFFFF 00:09:18.970 Flush: 
Supported 00:09:18.970 Reservation: Not Supported 00:09:18.970 Metadata Transferred as: Separate Metadata Buffer 00:09:18.970 Namespace Sharing Capabilities: Private 00:09:18.970 Size (in LBAs): 1548666 (5GiB) 00:09:18.970 Capacity (in LBAs): 1548666 (5GiB) 00:09:18.970 Utilization (in LBAs): 1548666 (5GiB) 00:09:18.970 Thin Provisioning: Not Supported 00:09:18.970 Per-NS Atomic Units: No 00:09:18.970 Maximum Single Source Range Length: 128 00:09:18.970 Maximum Copy Length: 128 00:09:18.970 Maximum Source Range Count: 128 00:09:18.970 NGUID/EUI64 Never Reused: No 00:09:18.970 Namespace Write Protected: No 00:09:18.970 Number of LBA Formats: 8 00:09:18.970 Current LBA Format: LBA Format #07 00:09:18.970 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:18.970 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:18.970 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:18.970 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:18.970 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:18.970 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:18.970 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:18.970 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:18.970 00:09:18.970 NVM Specific Namespace Data 00:09:18.970 =========================== 00:09:18.970 Logical Block Storage Tag Mask: 0 00:09:18.970 Protection Information Capabilities: 00:09:18.970 16b Guard Protection Information Storage Tag Support: No 00:09:18.970 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:18.970 Storage Tag Check Read Support: No 00:09:18.970 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.970 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.970 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.970 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.970 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.970 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.970 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.970 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.970 11:04:47 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:18.970 11:04:47 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:18.970 ===================================================== 00:09:18.970 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:18.970 ===================================================== 00:09:18.970 Controller Capabilities/Features 00:09:18.970 ================================ 00:09:18.970 Vendor ID: 1b36 00:09:18.970 Subsystem Vendor ID: 1af4 00:09:18.970 Serial Number: 12341 00:09:18.970 Model Number: QEMU NVMe Ctrl 00:09:18.970 Firmware Version: 8.0.0 00:09:18.970 Recommended Arb Burst: 6 00:09:18.970 IEEE OUI Identifier: 00 54 52 00:09:18.970 Multi-path I/O 00:09:18.970 May have multiple subsystem ports: No 00:09:18.970 May have multiple controllers: No 00:09:18.970 Associated with SR-IOV VF: No 00:09:18.970 Max Data Transfer Size: 524288 00:09:18.970 Max Number of Namespaces: 256 00:09:18.970 Max Number of I/O Queues: 64 00:09:18.970 NVMe 
Specification Version (VS): 1.4 00:09:18.970 NVMe Specification Version (Identify): 1.4 00:09:18.970 Maximum Queue Entries: 2048 00:09:18.970 Contiguous Queues Required: Yes 00:09:18.970 Arbitration Mechanisms Supported 00:09:18.970 Weighted Round Robin: Not Supported 00:09:18.970 Vendor Specific: Not Supported 00:09:18.970 Reset Timeout: 7500 ms 00:09:18.970 Doorbell Stride: 4 bytes 00:09:18.970 NVM Subsystem Reset: Not Supported 00:09:18.970 Command Sets Supported 00:09:18.970 NVM Command Set: Supported 00:09:18.970 Boot Partition: Not Supported 00:09:18.970 Memory Page Size Minimum: 4096 bytes 00:09:18.970 Memory Page Size Maximum: 65536 bytes 00:09:18.970 Persistent Memory Region: Not Supported 00:09:18.970 Optional Asynchronous Events Supported 00:09:18.970 Namespace Attribute Notices: Supported 00:09:18.970 Firmware Activation Notices: Not Supported 00:09:18.970 ANA Change Notices: Not Supported 00:09:18.970 PLE Aggregate Log Change Notices: Not Supported 00:09:18.970 LBA Status Info Alert Notices: Not Supported 00:09:18.970 EGE Aggregate Log Change Notices: Not Supported 00:09:18.970 Normal NVM Subsystem Shutdown event: Not Supported 00:09:18.970 Zone Descriptor Change Notices: Not Supported 00:09:18.970 Discovery Log Change Notices: Not Supported 00:09:18.970 Controller Attributes 00:09:18.970 128-bit Host Identifier: Not Supported 00:09:18.970 Non-Operational Permissive Mode: Not Supported 00:09:18.970 NVM Sets: Not Supported 00:09:18.970 Read Recovery Levels: Not Supported 00:09:18.970 Endurance Groups: Not Supported 00:09:18.970 Predictable Latency Mode: Not Supported 00:09:18.970 Traffic Based Keep ALive: Not Supported 00:09:18.970 Namespace Granularity: Not Supported 00:09:18.970 SQ Associations: Not Supported 00:09:18.970 UUID List: Not Supported 00:09:18.970 Multi-Domain Subsystem: Not Supported 00:09:18.970 Fixed Capacity Management: Not Supported 00:09:18.970 Variable Capacity Management: Not Supported 00:09:18.970 Delete Endurance Group: Not Supported 00:09:18.970 Delete NVM Set: Not Supported 00:09:18.970 Extended LBA Formats Supported: Supported 00:09:18.970 Flexible Data Placement Supported: Not Supported 00:09:18.970 00:09:18.970 Controller Memory Buffer Support 00:09:18.970 ================================ 00:09:18.970 Supported: No 00:09:18.970 00:09:18.970 Persistent Memory Region Support 00:09:18.970 ================================ 00:09:18.970 Supported: No 00:09:18.970 00:09:18.970 Admin Command Set Attributes 00:09:18.970 ============================ 00:09:18.970 Security Send/Receive: Not Supported 00:09:18.970 Format NVM: Supported 00:09:18.970 Firmware Activate/Download: Not Supported 00:09:18.970 Namespace Management: Supported 00:09:18.970 Device Self-Test: Not Supported 00:09:18.970 Directives: Supported 00:09:18.970 NVMe-MI: Not Supported 00:09:18.970 Virtualization Management: Not Supported 00:09:18.970 Doorbell Buffer Config: Supported 00:09:18.970 Get LBA Status Capability: Not Supported 00:09:18.970 Command & Feature Lockdown Capability: Not Supported 00:09:18.970 Abort Command Limit: 4 00:09:18.970 Async Event Request Limit: 4 00:09:18.970 Number of Firmware Slots: N/A 00:09:18.970 Firmware Slot 1 Read-Only: N/A 00:09:18.970 Firmware Activation Without Reset: N/A 00:09:18.970 Multiple Update Detection Support: N/A 00:09:18.970 Firmware Update Granularity: No Information Provided 00:09:18.970 Per-Namespace SMART Log: Yes 00:09:18.970 Asymmetric Namespace Access Log Page: Not Supported 00:09:18.970 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:09:18.970 Command Effects Log Page: Supported 00:09:18.970 Get Log Page Extended Data: Supported 00:09:18.970 Telemetry Log Pages: Not Supported 00:09:18.970 Persistent Event Log Pages: Not Supported 00:09:18.970 Supported Log Pages Log Page: May Support 00:09:18.970 Commands Supported & Effects Log Page: Not Supported 00:09:18.970 Feature Identifiers & Effects Log Page:May Support 00:09:18.970 NVMe-MI Commands & Effects Log Page: May Support 00:09:18.970 Data Area 4 for Telemetry Log: Not Supported 00:09:18.970 Error Log Page Entries Supported: 1 00:09:18.970 Keep Alive: Not Supported 00:09:18.970 00:09:18.970 NVM Command Set Attributes 00:09:18.971 ========================== 00:09:18.971 Submission Queue Entry Size 00:09:18.971 Max: 64 00:09:18.971 Min: 64 00:09:18.971 Completion Queue Entry Size 00:09:18.971 Max: 16 00:09:18.971 Min: 16 00:09:18.971 Number of Namespaces: 256 00:09:18.971 Compare Command: Supported 00:09:18.971 Write Uncorrectable Command: Not Supported 00:09:18.971 Dataset Management Command: Supported 00:09:18.971 Write Zeroes Command: Supported 00:09:18.971 Set Features Save Field: Supported 00:09:18.971 Reservations: Not Supported 00:09:18.971 Timestamp: Supported 00:09:18.971 Copy: Supported 00:09:18.971 Volatile Write Cache: Present 00:09:18.971 Atomic Write Unit (Normal): 1 00:09:18.971 Atomic Write Unit (PFail): 1 00:09:18.971 Atomic Compare & Write Unit: 1 00:09:18.971 Fused Compare & Write: Not Supported 00:09:18.971 Scatter-Gather List 00:09:18.971 SGL Command Set: Supported 00:09:18.971 SGL Keyed: Not Supported 00:09:18.971 SGL Bit Bucket Descriptor: Not Supported 00:09:18.971 SGL Metadata Pointer: Not Supported 00:09:18.971 Oversized SGL: Not Supported 00:09:18.971 SGL Metadata Address: Not Supported 00:09:18.971 SGL Offset: Not Supported 00:09:18.971 Transport SGL Data Block: Not Supported 00:09:18.971 Replay Protected Memory Block: Not Supported 00:09:18.971 00:09:18.971 Firmware Slot Information 00:09:18.971 ========================= 00:09:18.971 Active slot: 1 00:09:18.971 Slot 1 Firmware Revision: 1.0 00:09:18.971 00:09:18.971 00:09:18.971 Commands Supported and Effects 00:09:18.971 ============================== 00:09:18.971 Admin Commands 00:09:18.971 -------------- 00:09:18.971 Delete I/O Submission Queue (00h): Supported 00:09:18.971 Create I/O Submission Queue (01h): Supported 00:09:18.971 Get Log Page (02h): Supported 00:09:18.971 Delete I/O Completion Queue (04h): Supported 00:09:18.971 Create I/O Completion Queue (05h): Supported 00:09:18.971 Identify (06h): Supported 00:09:18.971 Abort (08h): Supported 00:09:18.971 Set Features (09h): Supported 00:09:18.971 Get Features (0Ah): Supported 00:09:18.971 Asynchronous Event Request (0Ch): Supported 00:09:18.971 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:18.971 Directive Send (19h): Supported 00:09:18.971 Directive Receive (1Ah): Supported 00:09:18.971 Virtualization Management (1Ch): Supported 00:09:18.971 Doorbell Buffer Config (7Ch): Supported 00:09:18.971 Format NVM (80h): Supported LBA-Change 00:09:18.971 I/O Commands 00:09:18.971 ------------ 00:09:18.971 Flush (00h): Supported LBA-Change 00:09:18.971 Write (01h): Supported LBA-Change 00:09:18.971 Read (02h): Supported 00:09:18.971 Compare (05h): Supported 00:09:18.971 Write Zeroes (08h): Supported LBA-Change 00:09:18.971 Dataset Management (09h): Supported LBA-Change 00:09:18.971 Unknown (0Ch): Supported 00:09:18.971 Unknown (12h): Supported 00:09:18.971 Copy (19h): Supported LBA-Change 00:09:18.971 Unknown (1Dh): 
Supported LBA-Change 00:09:18.971 00:09:18.971 Error Log 00:09:18.971 ========= 00:09:18.971 00:09:18.971 Arbitration 00:09:18.971 =========== 00:09:18.971 Arbitration Burst: no limit 00:09:18.971 00:09:18.971 Power Management 00:09:18.971 ================ 00:09:18.971 Number of Power States: 1 00:09:18.971 Current Power State: Power State #0 00:09:18.971 Power State #0: 00:09:18.971 Max Power: 25.00 W 00:09:18.971 Non-Operational State: Operational 00:09:18.971 Entry Latency: 16 microseconds 00:09:18.971 Exit Latency: 4 microseconds 00:09:18.971 Relative Read Throughput: 0 00:09:18.971 Relative Read Latency: 0 00:09:18.971 Relative Write Throughput: 0 00:09:18.971 Relative Write Latency: 0 00:09:19.231 Idle Power: Not Reported 00:09:19.231 Active Power: Not Reported 00:09:19.231 Non-Operational Permissive Mode: Not Supported 00:09:19.231 00:09:19.231 Health Information 00:09:19.231 ================== 00:09:19.231 Critical Warnings: 00:09:19.231 Available Spare Space: OK 00:09:19.231 Temperature: OK 00:09:19.231 Device Reliability: OK 00:09:19.231 Read Only: No 00:09:19.231 Volatile Memory Backup: OK 00:09:19.231 Current Temperature: 323 Kelvin (50 Celsius) 00:09:19.231 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:19.231 Available Spare: 0% 00:09:19.231 Available Spare Threshold: 0% 00:09:19.231 Life Percentage Used: 0% 00:09:19.231 Data Units Read: 1053 00:09:19.231 Data Units Written: 919 00:09:19.231 Host Read Commands: 58141 00:09:19.231 Host Write Commands: 56933 00:09:19.231 Controller Busy Time: 0 minutes 00:09:19.231 Power Cycles: 0 00:09:19.231 Power On Hours: 0 hours 00:09:19.231 Unsafe Shutdowns: 0 00:09:19.231 Unrecoverable Media Errors: 0 00:09:19.231 Lifetime Error Log Entries: 0 00:09:19.231 Warning Temperature Time: 0 minutes 00:09:19.231 Critical Temperature Time: 0 minutes 00:09:19.231 00:09:19.231 Number of Queues 00:09:19.231 ================ 00:09:19.231 Number of I/O Submission Queues: 64 00:09:19.231 Number of I/O Completion Queues: 64 00:09:19.231 00:09:19.231 ZNS Specific Controller Data 00:09:19.231 ============================ 00:09:19.231 Zone Append Size Limit: 0 00:09:19.231 00:09:19.231 00:09:19.231 Active Namespaces 00:09:19.231 ================= 00:09:19.231 Namespace ID:1 00:09:19.231 Error Recovery Timeout: Unlimited 00:09:19.231 Command Set Identifier: NVM (00h) 00:09:19.232 Deallocate: Supported 00:09:19.232 Deallocated/Unwritten Error: Supported 00:09:19.232 Deallocated Read Value: All 0x00 00:09:19.232 Deallocate in Write Zeroes: Not Supported 00:09:19.232 Deallocated Guard Field: 0xFFFF 00:09:19.232 Flush: Supported 00:09:19.232 Reservation: Not Supported 00:09:19.232 Namespace Sharing Capabilities: Private 00:09:19.232 Size (in LBAs): 1310720 (5GiB) 00:09:19.232 Capacity (in LBAs): 1310720 (5GiB) 00:09:19.232 Utilization (in LBAs): 1310720 (5GiB) 00:09:19.232 Thin Provisioning: Not Supported 00:09:19.232 Per-NS Atomic Units: No 00:09:19.232 Maximum Single Source Range Length: 128 00:09:19.232 Maximum Copy Length: 128 00:09:19.232 Maximum Source Range Count: 128 00:09:19.232 NGUID/EUI64 Never Reused: No 00:09:19.232 Namespace Write Protected: No 00:09:19.232 Number of LBA Formats: 8 00:09:19.232 Current LBA Format: LBA Format #04 00:09:19.232 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:19.232 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:19.232 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:19.232 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:19.232 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:09:19.232 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:19.232 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:19.232 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:19.232 00:09:19.232 NVM Specific Namespace Data 00:09:19.232 =========================== 00:09:19.232 Logical Block Storage Tag Mask: 0 00:09:19.232 Protection Information Capabilities: 00:09:19.232 16b Guard Protection Information Storage Tag Support: No 00:09:19.232 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:19.232 Storage Tag Check Read Support: No 00:09:19.232 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.232 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.232 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.232 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.232 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.232 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.232 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.232 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.232 11:04:47 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:19.232 11:04:47 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:19.232 ===================================================== 00:09:19.232 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:19.232 ===================================================== 00:09:19.232 Controller Capabilities/Features 00:09:19.232 ================================ 00:09:19.232 Vendor ID: 1b36 00:09:19.232 Subsystem Vendor ID: 1af4 00:09:19.232 Serial Number: 12342 00:09:19.232 Model Number: QEMU NVMe Ctrl 00:09:19.232 Firmware Version: 8.0.0 00:09:19.232 Recommended Arb Burst: 6 00:09:19.232 IEEE OUI Identifier: 00 54 52 00:09:19.232 Multi-path I/O 00:09:19.232 May have multiple subsystem ports: No 00:09:19.232 May have multiple controllers: No 00:09:19.232 Associated with SR-IOV VF: No 00:09:19.232 Max Data Transfer Size: 524288 00:09:19.232 Max Number of Namespaces: 256 00:09:19.232 Max Number of I/O Queues: 64 00:09:19.232 NVMe Specification Version (VS): 1.4 00:09:19.232 NVMe Specification Version (Identify): 1.4 00:09:19.232 Maximum Queue Entries: 2048 00:09:19.232 Contiguous Queues Required: Yes 00:09:19.232 Arbitration Mechanisms Supported 00:09:19.232 Weighted Round Robin: Not Supported 00:09:19.232 Vendor Specific: Not Supported 00:09:19.232 Reset Timeout: 7500 ms 00:09:19.232 Doorbell Stride: 4 bytes 00:09:19.232 NVM Subsystem Reset: Not Supported 00:09:19.232 Command Sets Supported 00:09:19.232 NVM Command Set: Supported 00:09:19.232 Boot Partition: Not Supported 00:09:19.232 Memory Page Size Minimum: 4096 bytes 00:09:19.232 Memory Page Size Maximum: 65536 bytes 00:09:19.232 Persistent Memory Region: Not Supported 00:09:19.232 Optional Asynchronous Events Supported 00:09:19.232 Namespace Attribute Notices: Supported 00:09:19.232 Firmware Activation Notices: Not Supported 00:09:19.232 ANA Change Notices: Not Supported 00:09:19.232 PLE Aggregate Log Change Notices: Not Supported 00:09:19.232 LBA Status Info Alert Notices: 
Not Supported 00:09:19.232 EGE Aggregate Log Change Notices: Not Supported 00:09:19.232 Normal NVM Subsystem Shutdown event: Not Supported 00:09:19.232 Zone Descriptor Change Notices: Not Supported 00:09:19.232 Discovery Log Change Notices: Not Supported 00:09:19.232 Controller Attributes 00:09:19.232 128-bit Host Identifier: Not Supported 00:09:19.232 Non-Operational Permissive Mode: Not Supported 00:09:19.232 NVM Sets: Not Supported 00:09:19.232 Read Recovery Levels: Not Supported 00:09:19.232 Endurance Groups: Not Supported 00:09:19.232 Predictable Latency Mode: Not Supported 00:09:19.232 Traffic Based Keep ALive: Not Supported 00:09:19.232 Namespace Granularity: Not Supported 00:09:19.232 SQ Associations: Not Supported 00:09:19.232 UUID List: Not Supported 00:09:19.232 Multi-Domain Subsystem: Not Supported 00:09:19.232 Fixed Capacity Management: Not Supported 00:09:19.232 Variable Capacity Management: Not Supported 00:09:19.232 Delete Endurance Group: Not Supported 00:09:19.232 Delete NVM Set: Not Supported 00:09:19.232 Extended LBA Formats Supported: Supported 00:09:19.232 Flexible Data Placement Supported: Not Supported 00:09:19.232 00:09:19.232 Controller Memory Buffer Support 00:09:19.232 ================================ 00:09:19.232 Supported: No 00:09:19.232 00:09:19.232 Persistent Memory Region Support 00:09:19.232 ================================ 00:09:19.232 Supported: No 00:09:19.232 00:09:19.232 Admin Command Set Attributes 00:09:19.232 ============================ 00:09:19.232 Security Send/Receive: Not Supported 00:09:19.232 Format NVM: Supported 00:09:19.232 Firmware Activate/Download: Not Supported 00:09:19.232 Namespace Management: Supported 00:09:19.232 Device Self-Test: Not Supported 00:09:19.232 Directives: Supported 00:09:19.232 NVMe-MI: Not Supported 00:09:19.232 Virtualization Management: Not Supported 00:09:19.232 Doorbell Buffer Config: Supported 00:09:19.232 Get LBA Status Capability: Not Supported 00:09:19.232 Command & Feature Lockdown Capability: Not Supported 00:09:19.232 Abort Command Limit: 4 00:09:19.232 Async Event Request Limit: 4 00:09:19.232 Number of Firmware Slots: N/A 00:09:19.232 Firmware Slot 1 Read-Only: N/A 00:09:19.232 Firmware Activation Without Reset: N/A 00:09:19.232 Multiple Update Detection Support: N/A 00:09:19.232 Firmware Update Granularity: No Information Provided 00:09:19.232 Per-Namespace SMART Log: Yes 00:09:19.232 Asymmetric Namespace Access Log Page: Not Supported 00:09:19.232 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:19.232 Command Effects Log Page: Supported 00:09:19.232 Get Log Page Extended Data: Supported 00:09:19.232 Telemetry Log Pages: Not Supported 00:09:19.232 Persistent Event Log Pages: Not Supported 00:09:19.232 Supported Log Pages Log Page: May Support 00:09:19.232 Commands Supported & Effects Log Page: Not Supported 00:09:19.232 Feature Identifiers & Effects Log Page:May Support 00:09:19.232 NVMe-MI Commands & Effects Log Page: May Support 00:09:19.232 Data Area 4 for Telemetry Log: Not Supported 00:09:19.232 Error Log Page Entries Supported: 1 00:09:19.232 Keep Alive: Not Supported 00:09:19.232 00:09:19.232 NVM Command Set Attributes 00:09:19.232 ========================== 00:09:19.232 Submission Queue Entry Size 00:09:19.232 Max: 64 00:09:19.232 Min: 64 00:09:19.232 Completion Queue Entry Size 00:09:19.232 Max: 16 00:09:19.232 Min: 16 00:09:19.232 Number of Namespaces: 256 00:09:19.232 Compare Command: Supported 00:09:19.232 Write Uncorrectable Command: Not Supported 00:09:19.232 Dataset Management Command: 
Supported 00:09:19.232 Write Zeroes Command: Supported 00:09:19.232 Set Features Save Field: Supported 00:09:19.232 Reservations: Not Supported 00:09:19.232 Timestamp: Supported 00:09:19.232 Copy: Supported 00:09:19.232 Volatile Write Cache: Present 00:09:19.232 Atomic Write Unit (Normal): 1 00:09:19.232 Atomic Write Unit (PFail): 1 00:09:19.232 Atomic Compare & Write Unit: 1 00:09:19.232 Fused Compare & Write: Not Supported 00:09:19.232 Scatter-Gather List 00:09:19.232 SGL Command Set: Supported 00:09:19.232 SGL Keyed: Not Supported 00:09:19.232 SGL Bit Bucket Descriptor: Not Supported 00:09:19.232 SGL Metadata Pointer: Not Supported 00:09:19.232 Oversized SGL: Not Supported 00:09:19.232 SGL Metadata Address: Not Supported 00:09:19.232 SGL Offset: Not Supported 00:09:19.232 Transport SGL Data Block: Not Supported 00:09:19.232 Replay Protected Memory Block: Not Supported 00:09:19.232 00:09:19.232 Firmware Slot Information 00:09:19.233 ========================= 00:09:19.233 Active slot: 1 00:09:19.233 Slot 1 Firmware Revision: 1.0 00:09:19.233 00:09:19.233 00:09:19.233 Commands Supported and Effects 00:09:19.233 ============================== 00:09:19.233 Admin Commands 00:09:19.233 -------------- 00:09:19.233 Delete I/O Submission Queue (00h): Supported 00:09:19.233 Create I/O Submission Queue (01h): Supported 00:09:19.233 Get Log Page (02h): Supported 00:09:19.233 Delete I/O Completion Queue (04h): Supported 00:09:19.233 Create I/O Completion Queue (05h): Supported 00:09:19.233 Identify (06h): Supported 00:09:19.233 Abort (08h): Supported 00:09:19.233 Set Features (09h): Supported 00:09:19.233 Get Features (0Ah): Supported 00:09:19.233 Asynchronous Event Request (0Ch): Supported 00:09:19.233 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:19.233 Directive Send (19h): Supported 00:09:19.233 Directive Receive (1Ah): Supported 00:09:19.233 Virtualization Management (1Ch): Supported 00:09:19.233 Doorbell Buffer Config (7Ch): Supported 00:09:19.233 Format NVM (80h): Supported LBA-Change 00:09:19.233 I/O Commands 00:09:19.233 ------------ 00:09:19.233 Flush (00h): Supported LBA-Change 00:09:19.233 Write (01h): Supported LBA-Change 00:09:19.233 Read (02h): Supported 00:09:19.233 Compare (05h): Supported 00:09:19.233 Write Zeroes (08h): Supported LBA-Change 00:09:19.233 Dataset Management (09h): Supported LBA-Change 00:09:19.233 Unknown (0Ch): Supported 00:09:19.233 Unknown (12h): Supported 00:09:19.233 Copy (19h): Supported LBA-Change 00:09:19.233 Unknown (1Dh): Supported LBA-Change 00:09:19.233 00:09:19.233 Error Log 00:09:19.233 ========= 00:09:19.233 00:09:19.233 Arbitration 00:09:19.233 =========== 00:09:19.233 Arbitration Burst: no limit 00:09:19.233 00:09:19.233 Power Management 00:09:19.233 ================ 00:09:19.233 Number of Power States: 1 00:09:19.233 Current Power State: Power State #0 00:09:19.233 Power State #0: 00:09:19.233 Max Power: 25.00 W 00:09:19.233 Non-Operational State: Operational 00:09:19.233 Entry Latency: 16 microseconds 00:09:19.233 Exit Latency: 4 microseconds 00:09:19.233 Relative Read Throughput: 0 00:09:19.233 Relative Read Latency: 0 00:09:19.233 Relative Write Throughput: 0 00:09:19.233 Relative Write Latency: 0 00:09:19.233 Idle Power: Not Reported 00:09:19.233 Active Power: Not Reported 00:09:19.233 Non-Operational Permissive Mode: Not Supported 00:09:19.233 00:09:19.233 Health Information 00:09:19.233 ================== 00:09:19.233 Critical Warnings: 00:09:19.233 Available Spare Space: OK 00:09:19.233 Temperature: OK 00:09:19.233 Device 
Reliability: OK 00:09:19.233 Read Only: No 00:09:19.233 Volatile Memory Backup: OK 00:09:19.233 Current Temperature: 323 Kelvin (50 Celsius) 00:09:19.233 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:19.233 Available Spare: 0% 00:09:19.233 Available Spare Threshold: 0% 00:09:19.233 Life Percentage Used: 0% 00:09:19.233 Data Units Read: 2210 00:09:19.233 Data Units Written: 1997 00:09:19.233 Host Read Commands: 119865 00:09:19.233 Host Write Commands: 118134 00:09:19.233 Controller Busy Time: 0 minutes 00:09:19.233 Power Cycles: 0 00:09:19.233 Power On Hours: 0 hours 00:09:19.233 Unsafe Shutdowns: 0 00:09:19.233 Unrecoverable Media Errors: 0 00:09:19.233 Lifetime Error Log Entries: 0 00:09:19.233 Warning Temperature Time: 0 minutes 00:09:19.233 Critical Temperature Time: 0 minutes 00:09:19.233 00:09:19.233 Number of Queues 00:09:19.233 ================ 00:09:19.233 Number of I/O Submission Queues: 64 00:09:19.233 Number of I/O Completion Queues: 64 00:09:19.233 00:09:19.233 ZNS Specific Controller Data 00:09:19.233 ============================ 00:09:19.233 Zone Append Size Limit: 0 00:09:19.233 00:09:19.233 00:09:19.233 Active Namespaces 00:09:19.233 ================= 00:09:19.233 Namespace ID:1 00:09:19.233 Error Recovery Timeout: Unlimited 00:09:19.233 Command Set Identifier: NVM (00h) 00:09:19.233 Deallocate: Supported 00:09:19.233 Deallocated/Unwritten Error: Supported 00:09:19.233 Deallocated Read Value: All 0x00 00:09:19.233 Deallocate in Write Zeroes: Not Supported 00:09:19.233 Deallocated Guard Field: 0xFFFF 00:09:19.233 Flush: Supported 00:09:19.233 Reservation: Not Supported 00:09:19.233 Namespace Sharing Capabilities: Private 00:09:19.233 Size (in LBAs): 1048576 (4GiB) 00:09:19.233 Capacity (in LBAs): 1048576 (4GiB) 00:09:19.233 Utilization (in LBAs): 1048576 (4GiB) 00:09:19.233 Thin Provisioning: Not Supported 00:09:19.233 Per-NS Atomic Units: No 00:09:19.233 Maximum Single Source Range Length: 128 00:09:19.233 Maximum Copy Length: 128 00:09:19.233 Maximum Source Range Count: 128 00:09:19.233 NGUID/EUI64 Never Reused: No 00:09:19.233 Namespace Write Protected: No 00:09:19.233 Number of LBA Formats: 8 00:09:19.233 Current LBA Format: LBA Format #04 00:09:19.233 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:19.233 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:19.233 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:19.233 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:19.233 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:19.233 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:19.233 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:19.233 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:19.233 00:09:19.233 NVM Specific Namespace Data 00:09:19.233 =========================== 00:09:19.233 Logical Block Storage Tag Mask: 0 00:09:19.233 Protection Information Capabilities: 00:09:19.233 16b Guard Protection Information Storage Tag Support: No 00:09:19.233 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:19.233 Storage Tag Check Read Support: No 00:09:19.233 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Namespace ID:2 00:09:19.233 Error Recovery Timeout: Unlimited 00:09:19.233 Command Set Identifier: NVM (00h) 00:09:19.233 Deallocate: Supported 00:09:19.233 Deallocated/Unwritten Error: Supported 00:09:19.233 Deallocated Read Value: All 0x00 00:09:19.233 Deallocate in Write Zeroes: Not Supported 00:09:19.233 Deallocated Guard Field: 0xFFFF 00:09:19.233 Flush: Supported 00:09:19.233 Reservation: Not Supported 00:09:19.233 Namespace Sharing Capabilities: Private 00:09:19.233 Size (in LBAs): 1048576 (4GiB) 00:09:19.233 Capacity (in LBAs): 1048576 (4GiB) 00:09:19.233 Utilization (in LBAs): 1048576 (4GiB) 00:09:19.233 Thin Provisioning: Not Supported 00:09:19.233 Per-NS Atomic Units: No 00:09:19.233 Maximum Single Source Range Length: 128 00:09:19.233 Maximum Copy Length: 128 00:09:19.233 Maximum Source Range Count: 128 00:09:19.233 NGUID/EUI64 Never Reused: No 00:09:19.233 Namespace Write Protected: No 00:09:19.233 Number of LBA Formats: 8 00:09:19.233 Current LBA Format: LBA Format #04 00:09:19.233 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:19.233 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:19.233 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:19.233 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:19.233 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:19.233 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:19.233 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:19.233 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:19.233 00:09:19.233 NVM Specific Namespace Data 00:09:19.233 =========================== 00:09:19.233 Logical Block Storage Tag Mask: 0 00:09:19.233 Protection Information Capabilities: 00:09:19.233 16b Guard Protection Information Storage Tag Support: No 00:09:19.233 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:19.233 Storage Tag Check Read Support: No 00:09:19.233 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.233 Namespace ID:3 00:09:19.233 Error Recovery Timeout: Unlimited 00:09:19.234 Command Set Identifier: NVM (00h) 00:09:19.234 Deallocate: Supported 00:09:19.234 Deallocated/Unwritten Error: Supported 00:09:19.234 Deallocated Read Value: All 0x00 00:09:19.234 Deallocate in Write Zeroes: Not Supported 00:09:19.234 Deallocated Guard Field: 0xFFFF 00:09:19.234 Flush: Supported 00:09:19.234 Reservation: Not Supported 00:09:19.234 
Namespace Sharing Capabilities: Private 00:09:19.234 Size (in LBAs): 1048576 (4GiB) 00:09:19.234 Capacity (in LBAs): 1048576 (4GiB) 00:09:19.234 Utilization (in LBAs): 1048576 (4GiB) 00:09:19.234 Thin Provisioning: Not Supported 00:09:19.234 Per-NS Atomic Units: No 00:09:19.234 Maximum Single Source Range Length: 128 00:09:19.234 Maximum Copy Length: 128 00:09:19.234 Maximum Source Range Count: 128 00:09:19.234 NGUID/EUI64 Never Reused: No 00:09:19.234 Namespace Write Protected: No 00:09:19.234 Number of LBA Formats: 8 00:09:19.234 Current LBA Format: LBA Format #04 00:09:19.234 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:19.234 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:19.234 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:19.234 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:19.234 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:19.234 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:19.234 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:19.234 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:19.234 00:09:19.234 NVM Specific Namespace Data 00:09:19.234 =========================== 00:09:19.234 Logical Block Storage Tag Mask: 0 00:09:19.234 Protection Information Capabilities: 00:09:19.234 16b Guard Protection Information Storage Tag Support: No 00:09:19.234 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:19.234 Storage Tag Check Read Support: No 00:09:19.234 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.234 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.234 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.234 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.234 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.234 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.234 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.234 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.234 11:04:48 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:19.234 11:04:48 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:19.495 ===================================================== 00:09:19.495 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:19.495 ===================================================== 00:09:19.495 Controller Capabilities/Features 00:09:19.495 ================================ 00:09:19.495 Vendor ID: 1b36 00:09:19.495 Subsystem Vendor ID: 1af4 00:09:19.495 Serial Number: 12343 00:09:19.495 Model Number: QEMU NVMe Ctrl 00:09:19.495 Firmware Version: 8.0.0 00:09:19.495 Recommended Arb Burst: 6 00:09:19.495 IEEE OUI Identifier: 00 54 52 00:09:19.495 Multi-path I/O 00:09:19.495 May have multiple subsystem ports: No 00:09:19.495 May have multiple controllers: Yes 00:09:19.495 Associated with SR-IOV VF: No 00:09:19.495 Max Data Transfer Size: 524288 00:09:19.495 Max Number of Namespaces: 256 00:09:19.495 Max Number of I/O Queues: 64 00:09:19.495 NVMe Specification Version (VS): 1.4 00:09:19.495 NVMe Specification Version (Identify): 1.4 00:09:19.495 Maximum Queue Entries: 2048 
00:09:19.495 Contiguous Queues Required: Yes 00:09:19.495 Arbitration Mechanisms Supported 00:09:19.495 Weighted Round Robin: Not Supported 00:09:19.495 Vendor Specific: Not Supported 00:09:19.495 Reset Timeout: 7500 ms 00:09:19.495 Doorbell Stride: 4 bytes 00:09:19.495 NVM Subsystem Reset: Not Supported 00:09:19.495 Command Sets Supported 00:09:19.495 NVM Command Set: Supported 00:09:19.495 Boot Partition: Not Supported 00:09:19.495 Memory Page Size Minimum: 4096 bytes 00:09:19.495 Memory Page Size Maximum: 65536 bytes 00:09:19.495 Persistent Memory Region: Not Supported 00:09:19.495 Optional Asynchronous Events Supported 00:09:19.495 Namespace Attribute Notices: Supported 00:09:19.495 Firmware Activation Notices: Not Supported 00:09:19.495 ANA Change Notices: Not Supported 00:09:19.495 PLE Aggregate Log Change Notices: Not Supported 00:09:19.495 LBA Status Info Alert Notices: Not Supported 00:09:19.495 EGE Aggregate Log Change Notices: Not Supported 00:09:19.495 Normal NVM Subsystem Shutdown event: Not Supported 00:09:19.495 Zone Descriptor Change Notices: Not Supported 00:09:19.495 Discovery Log Change Notices: Not Supported 00:09:19.495 Controller Attributes 00:09:19.495 128-bit Host Identifier: Not Supported 00:09:19.495 Non-Operational Permissive Mode: Not Supported 00:09:19.495 NVM Sets: Not Supported 00:09:19.495 Read Recovery Levels: Not Supported 00:09:19.495 Endurance Groups: Supported 00:09:19.495 Predictable Latency Mode: Not Supported 00:09:19.495 Traffic Based Keep ALive: Not Supported 00:09:19.495 Namespace Granularity: Not Supported 00:09:19.495 SQ Associations: Not Supported 00:09:19.495 UUID List: Not Supported 00:09:19.495 Multi-Domain Subsystem: Not Supported 00:09:19.495 Fixed Capacity Management: Not Supported 00:09:19.495 Variable Capacity Management: Not Supported 00:09:19.495 Delete Endurance Group: Not Supported 00:09:19.495 Delete NVM Set: Not Supported 00:09:19.495 Extended LBA Formats Supported: Supported 00:09:19.495 Flexible Data Placement Supported: Supported 00:09:19.495 00:09:19.495 Controller Memory Buffer Support 00:09:19.495 ================================ 00:09:19.495 Supported: No 00:09:19.495 00:09:19.495 Persistent Memory Region Support 00:09:19.495 ================================ 00:09:19.495 Supported: No 00:09:19.495 00:09:19.495 Admin Command Set Attributes 00:09:19.495 ============================ 00:09:19.495 Security Send/Receive: Not Supported 00:09:19.495 Format NVM: Supported 00:09:19.495 Firmware Activate/Download: Not Supported 00:09:19.495 Namespace Management: Supported 00:09:19.495 Device Self-Test: Not Supported 00:09:19.495 Directives: Supported 00:09:19.495 NVMe-MI: Not Supported 00:09:19.495 Virtualization Management: Not Supported 00:09:19.495 Doorbell Buffer Config: Supported 00:09:19.495 Get LBA Status Capability: Not Supported 00:09:19.495 Command & Feature Lockdown Capability: Not Supported 00:09:19.495 Abort Command Limit: 4 00:09:19.495 Async Event Request Limit: 4 00:09:19.495 Number of Firmware Slots: N/A 00:09:19.495 Firmware Slot 1 Read-Only: N/A 00:09:19.495 Firmware Activation Without Reset: N/A 00:09:19.495 Multiple Update Detection Support: N/A 00:09:19.495 Firmware Update Granularity: No Information Provided 00:09:19.495 Per-Namespace SMART Log: Yes 00:09:19.495 Asymmetric Namespace Access Log Page: Not Supported 00:09:19.495 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:19.495 Command Effects Log Page: Supported 00:09:19.495 Get Log Page Extended Data: Supported 00:09:19.495 Telemetry Log Pages: Not 
Supported 00:09:19.495 Persistent Event Log Pages: Not Supported 00:09:19.495 Supported Log Pages Log Page: May Support 00:09:19.495 Commands Supported & Effects Log Page: Not Supported 00:09:19.495 Feature Identifiers & Effects Log Page:May Support 00:09:19.495 NVMe-MI Commands & Effects Log Page: May Support 00:09:19.495 Data Area 4 for Telemetry Log: Not Supported 00:09:19.495 Error Log Page Entries Supported: 1 00:09:19.495 Keep Alive: Not Supported 00:09:19.495 00:09:19.495 NVM Command Set Attributes 00:09:19.495 ========================== 00:09:19.495 Submission Queue Entry Size 00:09:19.495 Max: 64 00:09:19.495 Min: 64 00:09:19.495 Completion Queue Entry Size 00:09:19.495 Max: 16 00:09:19.495 Min: 16 00:09:19.495 Number of Namespaces: 256 00:09:19.495 Compare Command: Supported 00:09:19.495 Write Uncorrectable Command: Not Supported 00:09:19.495 Dataset Management Command: Supported 00:09:19.495 Write Zeroes Command: Supported 00:09:19.495 Set Features Save Field: Supported 00:09:19.495 Reservations: Not Supported 00:09:19.495 Timestamp: Supported 00:09:19.495 Copy: Supported 00:09:19.495 Volatile Write Cache: Present 00:09:19.495 Atomic Write Unit (Normal): 1 00:09:19.495 Atomic Write Unit (PFail): 1 00:09:19.495 Atomic Compare & Write Unit: 1 00:09:19.495 Fused Compare & Write: Not Supported 00:09:19.495 Scatter-Gather List 00:09:19.495 SGL Command Set: Supported 00:09:19.495 SGL Keyed: Not Supported 00:09:19.495 SGL Bit Bucket Descriptor: Not Supported 00:09:19.495 SGL Metadata Pointer: Not Supported 00:09:19.495 Oversized SGL: Not Supported 00:09:19.495 SGL Metadata Address: Not Supported 00:09:19.495 SGL Offset: Not Supported 00:09:19.495 Transport SGL Data Block: Not Supported 00:09:19.495 Replay Protected Memory Block: Not Supported 00:09:19.495 00:09:19.495 Firmware Slot Information 00:09:19.495 ========================= 00:09:19.495 Active slot: 1 00:09:19.495 Slot 1 Firmware Revision: 1.0 00:09:19.495 00:09:19.495 00:09:19.495 Commands Supported and Effects 00:09:19.495 ============================== 00:09:19.495 Admin Commands 00:09:19.495 -------------- 00:09:19.495 Delete I/O Submission Queue (00h): Supported 00:09:19.495 Create I/O Submission Queue (01h): Supported 00:09:19.495 Get Log Page (02h): Supported 00:09:19.495 Delete I/O Completion Queue (04h): Supported 00:09:19.495 Create I/O Completion Queue (05h): Supported 00:09:19.495 Identify (06h): Supported 00:09:19.495 Abort (08h): Supported 00:09:19.495 Set Features (09h): Supported 00:09:19.495 Get Features (0Ah): Supported 00:09:19.495 Asynchronous Event Request (0Ch): Supported 00:09:19.495 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:19.495 Directive Send (19h): Supported 00:09:19.495 Directive Receive (1Ah): Supported 00:09:19.495 Virtualization Management (1Ch): Supported 00:09:19.495 Doorbell Buffer Config (7Ch): Supported 00:09:19.496 Format NVM (80h): Supported LBA-Change 00:09:19.496 I/O Commands 00:09:19.496 ------------ 00:09:19.496 Flush (00h): Supported LBA-Change 00:09:19.496 Write (01h): Supported LBA-Change 00:09:19.496 Read (02h): Supported 00:09:19.496 Compare (05h): Supported 00:09:19.496 Write Zeroes (08h): Supported LBA-Change 00:09:19.496 Dataset Management (09h): Supported LBA-Change 00:09:19.496 Unknown (0Ch): Supported 00:09:19.496 Unknown (12h): Supported 00:09:19.496 Copy (19h): Supported LBA-Change 00:09:19.496 Unknown (1Dh): Supported LBA-Change 00:09:19.496 00:09:19.496 Error Log 00:09:19.496 ========= 00:09:19.496 00:09:19.496 Arbitration 00:09:19.496 =========== 
00:09:19.496 Arbitration Burst: no limit 00:09:19.496 00:09:19.496 Power Management 00:09:19.496 ================ 00:09:19.496 Number of Power States: 1 00:09:19.496 Current Power State: Power State #0 00:09:19.496 Power State #0: 00:09:19.496 Max Power: 25.00 W 00:09:19.496 Non-Operational State: Operational 00:09:19.496 Entry Latency: 16 microseconds 00:09:19.496 Exit Latency: 4 microseconds 00:09:19.496 Relative Read Throughput: 0 00:09:19.496 Relative Read Latency: 0 00:09:19.496 Relative Write Throughput: 0 00:09:19.496 Relative Write Latency: 0 00:09:19.496 Idle Power: Not Reported 00:09:19.496 Active Power: Not Reported 00:09:19.496 Non-Operational Permissive Mode: Not Supported 00:09:19.496 00:09:19.496 Health Information 00:09:19.496 ================== 00:09:19.496 Critical Warnings: 00:09:19.496 Available Spare Space: OK 00:09:19.496 Temperature: OK 00:09:19.496 Device Reliability: OK 00:09:19.496 Read Only: No 00:09:19.496 Volatile Memory Backup: OK 00:09:19.496 Current Temperature: 323 Kelvin (50 Celsius) 00:09:19.496 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:19.496 Available Spare: 0% 00:09:19.496 Available Spare Threshold: 0% 00:09:19.496 Life Percentage Used: 0% 00:09:19.496 Data Units Read: 831 00:09:19.496 Data Units Written: 760 00:09:19.496 Host Read Commands: 40833 00:09:19.496 Host Write Commands: 40256 00:09:19.496 Controller Busy Time: 0 minutes 00:09:19.496 Power Cycles: 0 00:09:19.496 Power On Hours: 0 hours 00:09:19.496 Unsafe Shutdowns: 0 00:09:19.496 Unrecoverable Media Errors: 0 00:09:19.496 Lifetime Error Log Entries: 0 00:09:19.496 Warning Temperature Time: 0 minutes 00:09:19.496 Critical Temperature Time: 0 minutes 00:09:19.496 00:09:19.496 Number of Queues 00:09:19.496 ================ 00:09:19.496 Number of I/O Submission Queues: 64 00:09:19.496 Number of I/O Completion Queues: 64 00:09:19.496 00:09:19.496 ZNS Specific Controller Data 00:09:19.496 ============================ 00:09:19.496 Zone Append Size Limit: 0 00:09:19.496 00:09:19.496 00:09:19.496 Active Namespaces 00:09:19.496 ================= 00:09:19.496 Namespace ID:1 00:09:19.496 Error Recovery Timeout: Unlimited 00:09:19.496 Command Set Identifier: NVM (00h) 00:09:19.496 Deallocate: Supported 00:09:19.496 Deallocated/Unwritten Error: Supported 00:09:19.496 Deallocated Read Value: All 0x00 00:09:19.496 Deallocate in Write Zeroes: Not Supported 00:09:19.496 Deallocated Guard Field: 0xFFFF 00:09:19.496 Flush: Supported 00:09:19.496 Reservation: Not Supported 00:09:19.496 Namespace Sharing Capabilities: Multiple Controllers 00:09:19.496 Size (in LBAs): 262144 (1GiB) 00:09:19.496 Capacity (in LBAs): 262144 (1GiB) 00:09:19.496 Utilization (in LBAs): 262144 (1GiB) 00:09:19.496 Thin Provisioning: Not Supported 00:09:19.496 Per-NS Atomic Units: No 00:09:19.496 Maximum Single Source Range Length: 128 00:09:19.496 Maximum Copy Length: 128 00:09:19.496 Maximum Source Range Count: 128 00:09:19.496 NGUID/EUI64 Never Reused: No 00:09:19.496 Namespace Write Protected: No 00:09:19.496 Endurance group ID: 1 00:09:19.496 Number of LBA Formats: 8 00:09:19.496 Current LBA Format: LBA Format #04 00:09:19.496 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:19.496 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:19.496 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:19.496 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:19.496 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:19.496 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:19.496 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:09:19.496 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:19.496 00:09:19.496 Get Feature FDP: 00:09:19.496 ================ 00:09:19.496 Enabled: Yes 00:09:19.496 FDP configuration index: 0 00:09:19.496 00:09:19.496 FDP configurations log page 00:09:19.496 =========================== 00:09:19.496 Number of FDP configurations: 1 00:09:19.496 Version: 0 00:09:19.496 Size: 112 00:09:19.496 FDP Configuration Descriptor: 0 00:09:19.496 Descriptor Size: 96 00:09:19.496 Reclaim Group Identifier format: 2 00:09:19.496 FDP Volatile Write Cache: Not Present 00:09:19.496 FDP Configuration: Valid 00:09:19.496 Vendor Specific Size: 0 00:09:19.496 Number of Reclaim Groups: 2 00:09:19.496 Number of Recalim Unit Handles: 8 00:09:19.496 Max Placement Identifiers: 128 00:09:19.496 Number of Namespaces Suppprted: 256 00:09:19.496 Reclaim unit Nominal Size: 6000000 bytes 00:09:19.496 Estimated Reclaim Unit Time Limit: Not Reported 00:09:19.496 RUH Desc #000: RUH Type: Initially Isolated 00:09:19.496 RUH Desc #001: RUH Type: Initially Isolated 00:09:19.496 RUH Desc #002: RUH Type: Initially Isolated 00:09:19.496 RUH Desc #003: RUH Type: Initially Isolated 00:09:19.496 RUH Desc #004: RUH Type: Initially Isolated 00:09:19.496 RUH Desc #005: RUH Type: Initially Isolated 00:09:19.496 RUH Desc #006: RUH Type: Initially Isolated 00:09:19.496 RUH Desc #007: RUH Type: Initially Isolated 00:09:19.496 00:09:19.496 FDP reclaim unit handle usage log page 00:09:19.496 ====================================== 00:09:19.496 Number of Reclaim Unit Handles: 8 00:09:19.496 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:19.496 RUH Usage Desc #001: RUH Attributes: Unused 00:09:19.496 RUH Usage Desc #002: RUH Attributes: Unused 00:09:19.496 RUH Usage Desc #003: RUH Attributes: Unused 00:09:19.496 RUH Usage Desc #004: RUH Attributes: Unused 00:09:19.496 RUH Usage Desc #005: RUH Attributes: Unused 00:09:19.496 RUH Usage Desc #006: RUH Attributes: Unused 00:09:19.496 RUH Usage Desc #007: RUH Attributes: Unused 00:09:19.496 00:09:19.496 FDP statistics log page 00:09:19.496 ======================= 00:09:19.496 Host bytes with metadata written: 477208576 00:09:19.496 Media bytes with metadata written: 477261824 00:09:19.496 Media bytes erased: 0 00:09:19.496 00:09:19.496 FDP events log page 00:09:19.496 =================== 00:09:19.496 Number of FDP events: 0 00:09:19.496 00:09:19.496 NVM Specific Namespace Data 00:09:19.496 =========================== 00:09:19.496 Logical Block Storage Tag Mask: 0 00:09:19.496 Protection Information Capabilities: 00:09:19.496 16b Guard Protection Information Storage Tag Support: No 00:09:19.496 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:19.496 Storage Tag Check Read Support: No 00:09:19.496 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.496 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.496 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.496 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.496 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.496 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.496 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.496 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.496 00:09:19.496 real 0m1.042s 00:09:19.496 user 0m0.359s 00:09:19.496 sys 0m0.477s 00:09:19.496 11:04:48 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.496 11:04:48 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:19.496 ************************************ 00:09:19.496 END TEST nvme_identify 00:09:19.496 ************************************ 00:09:19.496 11:04:48 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:19.496 11:04:48 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:19.496 11:04:48 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.496 11:04:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:19.496 ************************************ 00:09:19.496 START TEST nvme_perf 00:09:19.496 ************************************ 00:09:19.496 11:04:48 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:09:19.496 11:04:48 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:20.883 Initializing NVMe Controllers 00:09:20.883 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:20.883 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:20.883 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:20.883 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:20.883 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:20.883 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:20.883 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:20.883 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:20.883 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:20.883 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:20.883 Initialization complete. Launching workers. 
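For reference, the identify and perf steps traced above can be reproduced by hand along these lines. This is a minimal sketch, not part of the CI script: it assumes the same SPDK build path shown in the trace and that the controllers are already bound to a userspace driver (e.g. via SPDK's setup script); the BDF list and the perf flags are taken verbatim from the nvme.sh@15/@16 and nvme.sh@22 traces above.

  #!/usr/bin/env bash
  # Sketch of the manual equivalent of the traced identify/perf runs (assumed paths/BDFs from the log).
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)

  # Dump controller and namespace data for each device, as nvme.sh@15-16 does per BDF.
  for bdf in "${bdfs[@]}"; do
    "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
  done

  # Short read workload with the same flags as the traced run:
  # queue depth 128, 12288-byte I/Os, 1 second, plus the latency/logging options used above.
  "$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N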
00:09:20.883 ======================================================== 00:09:20.883 Latency(us) 00:09:20.883 Device Information : IOPS MiB/s Average min max 00:09:20.883 PCIE (0000:00:13.0) NSID 1 from core 0: 9016.19 105.66 14214.33 5640.63 32050.24 00:09:20.883 PCIE (0000:00:10.0) NSID 1 from core 0: 9016.19 105.66 14208.16 5548.47 32476.32 00:09:20.883 PCIE (0000:00:11.0) NSID 1 from core 0: 9016.19 105.66 14198.28 5665.41 32480.69 00:09:20.883 PCIE (0000:00:12.0) NSID 1 from core 0: 9016.19 105.66 14186.89 5637.80 32415.33 00:09:20.883 PCIE (0000:00:12.0) NSID 2 from core 0: 9016.19 105.66 14175.15 5633.66 32139.02 00:09:20.883 PCIE (0000:00:12.0) NSID 3 from core 0: 9080.14 106.41 14063.51 4605.78 25179.51 00:09:20.883 ======================================================== 00:09:20.883 Total : 54161.10 634.70 14174.26 4605.78 32480.69 00:09:20.883 00:09:20.883 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:20.883 ================================================================================= 00:09:20.883 1.00000% : 5797.415us 00:09:20.883 10.00000% : 6351.951us 00:09:20.883 25.00000% : 12653.489us 00:09:20.883 50.00000% : 15627.815us 00:09:20.883 75.00000% : 16938.535us 00:09:20.883 90.00000% : 18450.905us 00:09:20.883 95.00000% : 19156.677us 00:09:20.883 98.00000% : 19862.449us 00:09:20.883 99.00000% : 21878.942us 00:09:20.883 99.50000% : 30852.332us 00:09:20.883 99.90000% : 31860.578us 00:09:20.883 99.99000% : 32062.228us 00:09:20.883 99.99900% : 32062.228us 00:09:20.883 99.99990% : 32062.228us 00:09:20.883 99.99999% : 32062.228us 00:09:20.883 00:09:20.883 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:20.883 ================================================================================= 00:09:20.883 1.00000% : 5721.797us 00:09:20.883 10.00000% : 6402.363us 00:09:20.883 25.00000% : 12401.428us 00:09:20.883 50.00000% : 15325.342us 00:09:20.883 75.00000% : 16938.535us 00:09:20.883 90.00000% : 18652.554us 00:09:20.883 95.00000% : 19660.800us 00:09:20.883 98.00000% : 20669.046us 00:09:20.883 99.00000% : 22786.363us 00:09:20.883 99.50000% : 31255.631us 00:09:20.883 99.90000% : 32263.877us 00:09:20.883 99.99000% : 32667.175us 00:09:20.883 99.99900% : 32667.175us 00:09:20.883 99.99990% : 32667.175us 00:09:20.883 99.99999% : 32667.175us 00:09:20.883 00:09:20.883 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:20.883 ================================================================================= 00:09:20.883 1.00000% : 5797.415us 00:09:20.883 10.00000% : 6351.951us 00:09:20.883 25.00000% : 12401.428us 00:09:20.883 50.00000% : 15325.342us 00:09:20.883 75.00000% : 17140.185us 00:09:20.883 90.00000% : 18450.905us 00:09:20.883 95.00000% : 19660.800us 00:09:20.883 98.00000% : 20467.397us 00:09:20.883 99.00000% : 23088.837us 00:09:20.883 99.50000% : 31457.280us 00:09:20.883 99.90000% : 32263.877us 00:09:20.883 99.99000% : 32667.175us 00:09:20.883 99.99900% : 32667.175us 00:09:20.884 99.99990% : 32667.175us 00:09:20.884 99.99999% : 32667.175us 00:09:20.884 00:09:20.884 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:20.884 ================================================================================= 00:09:20.884 1.00000% : 5772.209us 00:09:20.884 10.00000% : 6351.951us 00:09:20.884 25.00000% : 12502.252us 00:09:20.884 50.00000% : 15526.991us 00:09:20.884 75.00000% : 16938.535us 00:09:20.884 90.00000% : 18551.729us 00:09:20.884 95.00000% : 19459.151us 00:09:20.884 98.00000% : 20164.923us 
00:09:20.884 99.00000% : 23895.434us 00:09:20.884 99.50000% : 31860.578us 00:09:20.884 99.90000% : 32465.526us 00:09:20.884 99.99000% : 32465.526us 00:09:20.884 99.99900% : 32465.526us 00:09:20.884 99.99990% : 32465.526us 00:09:20.884 99.99999% : 32465.526us 00:09:20.884 00:09:20.884 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:20.884 ================================================================================= 00:09:20.884 1.00000% : 5797.415us 00:09:20.884 10.00000% : 6351.951us 00:09:20.884 25.00000% : 12653.489us 00:09:20.884 50.00000% : 15526.991us 00:09:20.884 75.00000% : 16938.535us 00:09:20.884 90.00000% : 18350.080us 00:09:20.884 95.00000% : 19156.677us 00:09:20.884 98.00000% : 20064.098us 00:09:20.884 99.00000% : 23996.258us 00:09:20.884 99.50000% : 31457.280us 00:09:20.884 99.90000% : 32062.228us 00:09:20.884 99.99000% : 32263.877us 00:09:20.884 99.99900% : 32263.877us 00:09:20.884 99.99990% : 32263.877us 00:09:20.884 99.99999% : 32263.877us 00:09:20.884 00:09:20.884 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:20.884 ================================================================================= 00:09:20.884 1.00000% : 5696.591us 00:09:20.884 10.00000% : 6326.745us 00:09:20.884 25.00000% : 12552.665us 00:09:20.884 50.00000% : 15627.815us 00:09:20.884 75.00000% : 16837.711us 00:09:20.884 90.00000% : 18148.431us 00:09:20.884 95.00000% : 18854.203us 00:09:20.884 98.00000% : 19660.800us 00:09:20.884 99.00000% : 20265.748us 00:09:20.884 99.50000% : 23996.258us 00:09:20.884 99.90000% : 25004.505us 00:09:20.884 99.99000% : 25206.154us 00:09:20.884 99.99900% : 25206.154us 00:09:20.884 99.99990% : 25206.154us 00:09:20.884 99.99999% : 25206.154us 00:09:20.884 00:09:20.884 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:20.884 ============================================================================== 00:09:20.884 Range in us Cumulative IO count 00:09:20.884 5620.972 - 5646.178: 0.0111% ( 1) 00:09:20.884 5646.178 - 5671.385: 0.0443% ( 3) 00:09:20.884 5671.385 - 5696.591: 0.1441% ( 9) 00:09:20.884 5696.591 - 5721.797: 0.3768% ( 21) 00:09:20.884 5721.797 - 5747.003: 0.6649% ( 26) 00:09:20.884 5747.003 - 5772.209: 0.9752% ( 28) 00:09:20.884 5772.209 - 5797.415: 1.2522% ( 25) 00:09:20.884 5797.415 - 5822.622: 1.5403% ( 26) 00:09:20.884 5822.622 - 5847.828: 1.8063% ( 24) 00:09:20.884 5847.828 - 5873.034: 2.1277% ( 29) 00:09:20.884 5873.034 - 5898.240: 2.5266% ( 36) 00:09:20.884 5898.240 - 5923.446: 2.8923% ( 33) 00:09:20.884 5923.446 - 5948.652: 3.3466% ( 41) 00:09:20.884 5948.652 - 5973.858: 3.7788% ( 39) 00:09:20.884 5973.858 - 5999.065: 4.1667% ( 35) 00:09:20.884 5999.065 - 6024.271: 4.6099% ( 40) 00:09:20.884 6024.271 - 6049.477: 4.9756% ( 33) 00:09:20.884 6049.477 - 6074.683: 5.3413% ( 33) 00:09:20.884 6074.683 - 6099.889: 5.7292% ( 35) 00:09:20.884 6099.889 - 6125.095: 6.2057% ( 43) 00:09:20.884 6125.095 - 6150.302: 6.6489% ( 40) 00:09:20.884 6150.302 - 6175.508: 7.1033% ( 41) 00:09:20.884 6175.508 - 6200.714: 7.5022% ( 36) 00:09:20.884 6200.714 - 6225.920: 7.9122% ( 37) 00:09:20.884 6225.920 - 6251.126: 8.3223% ( 37) 00:09:20.884 6251.126 - 6276.332: 8.7434% ( 38) 00:09:20.884 6276.332 - 6301.538: 9.1645% ( 38) 00:09:20.884 6301.538 - 6326.745: 9.5745% ( 37) 00:09:20.884 6326.745 - 6351.951: 10.0066% ( 39) 00:09:20.884 6351.951 - 6377.157: 10.4056% ( 36) 00:09:20.884 6377.157 - 6402.363: 10.8267% ( 38) 00:09:20.884 6402.363 - 6427.569: 11.2035% ( 34) 00:09:20.884 6427.569 - 6452.775: 11.6135% ( 37) 
00:09:20.884 6452.775 - 6503.188: 12.3781% ( 69) 00:09:20.884 6503.188 - 6553.600: 12.9765% ( 54) 00:09:20.884 6553.600 - 6604.012: 13.4198% ( 40) 00:09:20.884 6604.012 - 6654.425: 13.8187% ( 36) 00:09:20.884 6654.425 - 6704.837: 14.1179% ( 27) 00:09:20.884 6704.837 - 6755.249: 14.2841% ( 15) 00:09:20.884 6755.249 - 6805.662: 14.4171% ( 12) 00:09:20.884 6805.662 - 6856.074: 14.5501% ( 12) 00:09:20.884 6856.074 - 6906.486: 14.6609% ( 10) 00:09:20.884 6906.486 - 6956.898: 14.7606% ( 9) 00:09:20.884 6956.898 - 7007.311: 14.8825% ( 11) 00:09:20.884 7007.311 - 7057.723: 14.9934% ( 10) 00:09:20.884 7057.723 - 7108.135: 15.1042% ( 10) 00:09:20.884 7108.135 - 7158.548: 15.1928% ( 8) 00:09:20.884 7158.548 - 7208.960: 15.2704% ( 7) 00:09:20.884 7208.960 - 7259.372: 15.3369% ( 6) 00:09:20.884 7259.372 - 7309.785: 15.4477% ( 10) 00:09:20.884 7309.785 - 7360.197: 15.5696% ( 11) 00:09:20.884 7360.197 - 7410.609: 15.6804% ( 10) 00:09:20.884 7410.609 - 7461.022: 15.7580% ( 7) 00:09:20.884 7461.022 - 7511.434: 15.8245% ( 6) 00:09:20.884 7511.434 - 7561.846: 15.8577% ( 3) 00:09:20.884 7561.846 - 7612.258: 15.9020% ( 4) 00:09:20.884 7612.258 - 7662.671: 15.9574% ( 5) 00:09:20.884 7662.671 - 7713.083: 16.0018% ( 4) 00:09:20.884 7713.083 - 7763.495: 16.0461% ( 4) 00:09:20.884 7763.495 - 7813.908: 16.0904% ( 4) 00:09:20.884 7813.908 - 7864.320: 16.1237% ( 3) 00:09:20.884 7864.320 - 7914.732: 16.1791% ( 5) 00:09:20.884 7914.732 - 7965.145: 16.2566% ( 7) 00:09:20.884 7965.145 - 8015.557: 16.3453% ( 8) 00:09:20.884 8015.557 - 8065.969: 16.4672% ( 11) 00:09:20.884 8065.969 - 8116.382: 16.5780% ( 10) 00:09:20.884 8116.382 - 8166.794: 16.6556% ( 7) 00:09:20.884 8166.794 - 8217.206: 16.7332% ( 7) 00:09:20.884 8217.206 - 8267.618: 16.8107% ( 7) 00:09:20.884 8267.618 - 8318.031: 16.8883% ( 7) 00:09:20.884 8318.031 - 8368.443: 16.9770% ( 8) 00:09:20.884 8368.443 - 8418.855: 17.0434% ( 6) 00:09:20.884 8418.855 - 8469.268: 17.1210% ( 7) 00:09:20.884 8469.268 - 8519.680: 17.2097% ( 8) 00:09:20.884 8519.680 - 8570.092: 17.2983% ( 8) 00:09:20.884 8570.092 - 8620.505: 17.4091% ( 10) 00:09:20.884 8620.505 - 8670.917: 17.5199% ( 10) 00:09:20.884 8670.917 - 8721.329: 17.6308% ( 10) 00:09:20.884 8721.329 - 8771.742: 17.7416% ( 10) 00:09:20.884 8771.742 - 8822.154: 17.8302% ( 8) 00:09:20.884 8822.154 - 8872.566: 17.9078% ( 7) 00:09:20.884 8872.566 - 8922.978: 17.9965% ( 8) 00:09:20.884 8922.978 - 8973.391: 18.0519% ( 5) 00:09:20.884 8973.391 - 9023.803: 18.1516% ( 9) 00:09:20.884 9023.803 - 9074.215: 18.2292% ( 7) 00:09:20.884 9074.215 - 9124.628: 18.3067% ( 7) 00:09:20.884 9124.628 - 9175.040: 18.3843% ( 7) 00:09:20.884 9175.040 - 9225.452: 18.4730% ( 8) 00:09:20.884 9225.452 - 9275.865: 18.5616% ( 8) 00:09:20.884 9275.865 - 9326.277: 18.6392% ( 7) 00:09:20.884 9326.277 - 9376.689: 18.8054% ( 15) 00:09:20.884 9376.689 - 9427.102: 18.9162% ( 10) 00:09:20.884 9427.102 - 9477.514: 19.0049% ( 8) 00:09:20.884 9477.514 - 9527.926: 19.1157% ( 10) 00:09:20.884 9527.926 - 9578.338: 19.2376% ( 11) 00:09:20.884 9578.338 - 9628.751: 19.3373% ( 9) 00:09:20.884 9628.751 - 9679.163: 19.4592% ( 11) 00:09:20.884 9679.163 - 9729.575: 19.5700% ( 10) 00:09:20.884 9729.575 - 9779.988: 19.6698% ( 9) 00:09:20.884 9779.988 - 9830.400: 19.7917% ( 11) 00:09:20.884 9830.400 - 9880.812: 19.9025% ( 10) 00:09:20.884 9880.812 - 9931.225: 19.9801% ( 7) 00:09:20.884 9931.225 - 9981.637: 20.0576% ( 7) 00:09:20.884 9981.637 - 10032.049: 20.1352% ( 7) 00:09:20.884 10032.049 - 10082.462: 20.2128% ( 7) 00:09:20.884 10082.462 - 10132.874: 20.2903% ( 7) 00:09:20.884 
10132.874 - 10183.286: 20.3236% ( 3) 00:09:20.884 10183.286 - 10233.698: 20.3457% ( 2) 00:09:20.884 10233.698 - 10284.111: 20.4122% ( 6) 00:09:20.884 10284.111 - 10334.523: 20.4898% ( 7) 00:09:20.884 10334.523 - 10384.935: 20.5785% ( 8) 00:09:20.884 10384.935 - 10435.348: 20.6560% ( 7) 00:09:20.884 10435.348 - 10485.760: 20.7225% ( 6) 00:09:20.884 10485.760 - 10536.172: 20.8001% ( 7) 00:09:20.884 10536.172 - 10586.585: 20.8555% ( 5) 00:09:20.884 10586.585 - 10636.997: 20.8998% ( 4) 00:09:20.884 10636.997 - 10687.409: 20.9331% ( 3) 00:09:20.884 10687.409 - 10737.822: 20.9774% ( 4) 00:09:20.884 10737.822 - 10788.234: 21.0217% ( 4) 00:09:20.884 10788.234 - 10838.646: 21.0660% ( 4) 00:09:20.884 10838.646 - 10889.058: 21.1104% ( 4) 00:09:20.884 10889.058 - 10939.471: 21.1769% ( 6) 00:09:20.884 10939.471 - 10989.883: 21.2655% ( 8) 00:09:20.884 10989.883 - 11040.295: 21.3320% ( 6) 00:09:20.884 11040.295 - 11090.708: 21.4207% ( 8) 00:09:20.884 11090.708 - 11141.120: 21.4539% ( 3) 00:09:20.884 11141.120 - 11191.532: 21.4982% ( 4) 00:09:20.884 11191.532 - 11241.945: 21.5426% ( 4) 00:09:20.884 11241.945 - 11292.357: 21.5869% ( 4) 00:09:20.884 11292.357 - 11342.769: 21.6312% ( 4) 00:09:20.884 11342.769 - 11393.182: 21.6645% ( 3) 00:09:20.884 11393.182 - 11443.594: 21.7309% ( 6) 00:09:20.884 11443.594 - 11494.006: 21.8085% ( 7) 00:09:20.884 11494.006 - 11544.418: 21.8972% ( 8) 00:09:20.884 11544.418 - 11594.831: 21.9747% ( 7) 00:09:20.884 11594.831 - 11645.243: 22.0523% ( 7) 00:09:20.884 11645.243 - 11695.655: 22.1520% ( 9) 00:09:20.884 11695.655 - 11746.068: 22.2518% ( 9) 00:09:20.884 11746.068 - 11796.480: 22.3626% ( 10) 00:09:20.884 11796.480 - 11846.892: 22.4845% ( 11) 00:09:20.885 11846.892 - 11897.305: 22.5621% ( 7) 00:09:20.885 11897.305 - 11947.717: 22.6396% ( 7) 00:09:20.885 11947.717 - 11998.129: 22.7615% ( 11) 00:09:20.885 11998.129 - 12048.542: 22.9277% ( 15) 00:09:20.885 12048.542 - 12098.954: 23.0940% ( 15) 00:09:20.885 12098.954 - 12149.366: 23.2159% ( 11) 00:09:20.885 12149.366 - 12199.778: 23.3821% ( 15) 00:09:20.885 12199.778 - 12250.191: 23.5262% ( 13) 00:09:20.885 12250.191 - 12300.603: 23.7145% ( 17) 00:09:20.885 12300.603 - 12351.015: 23.9029% ( 17) 00:09:20.885 12351.015 - 12401.428: 24.0691% ( 15) 00:09:20.885 12401.428 - 12451.840: 24.2575% ( 17) 00:09:20.885 12451.840 - 12502.252: 24.4681% ( 19) 00:09:20.885 12502.252 - 12552.665: 24.6786% ( 19) 00:09:20.885 12552.665 - 12603.077: 24.9446% ( 24) 00:09:20.885 12603.077 - 12653.489: 25.2438% ( 27) 00:09:20.885 12653.489 - 12703.902: 25.5208% ( 25) 00:09:20.885 12703.902 - 12754.314: 25.8200% ( 27) 00:09:20.885 12754.314 - 12804.726: 26.1525% ( 30) 00:09:20.885 12804.726 - 12855.138: 26.4960% ( 31) 00:09:20.885 12855.138 - 12905.551: 26.7620% ( 24) 00:09:20.885 12905.551 - 13006.375: 27.3050% ( 49) 00:09:20.885 13006.375 - 13107.200: 27.8701% ( 51) 00:09:20.885 13107.200 - 13208.025: 28.5129% ( 58) 00:09:20.885 13208.025 - 13308.849: 29.1223% ( 55) 00:09:20.885 13308.849 - 13409.674: 29.7318% ( 55) 00:09:20.885 13409.674 - 13510.498: 30.2637% ( 48) 00:09:20.885 13510.498 - 13611.323: 30.8067% ( 49) 00:09:20.885 13611.323 - 13712.148: 31.3054% ( 45) 00:09:20.885 13712.148 - 13812.972: 31.8595% ( 50) 00:09:20.885 13812.972 - 13913.797: 32.5909% ( 66) 00:09:20.885 13913.797 - 14014.622: 33.6325% ( 94) 00:09:20.885 14014.622 - 14115.446: 34.5412% ( 82) 00:09:20.885 14115.446 - 14216.271: 35.4499% ( 82) 00:09:20.885 14216.271 - 14317.095: 36.1259% ( 61) 00:09:20.885 14317.095 - 14417.920: 36.8573% ( 66) 00:09:20.885 14417.920 - 
[continuation of the preceding latency histogram: cumulative IO count rises from 37.6884% at 14518.745 us to 100.0000% at 32062.228 us]
00:09:20.885 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:20.885 ==============================================================================
00:09:20.885 Range in us Cumulative IO count
[per-bucket cumulative latency distribution: buckets from 5545.354 us to 32667.175 us, reaching 100.0000% of IOs in the top bucket]
00:09:20.887 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:20.887 ==============================================================================
00:09:20.887 Range in us Cumulative IO count
[per-bucket cumulative latency distribution: buckets from 5646.178 us to 32667.175 us, reaching 100.0000% of IOs in the top bucket]
00:09:20.888 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:20.888 ==============================================================================
00:09:20.888 Range in us Cumulative IO count
[per-bucket cumulative latency distribution: buckets from 5620.972 us to 32465.526 us, reaching 100.0000% of IOs in the top bucket]
00:09:20.890 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:20.890 ==============================================================================
00:09:20.890 Range in us Cumulative IO count
[per-bucket cumulative latency distribution: buckets from 5620.972 us to 32263.877 us, reaching 100.0000% of IOs in the top bucket]
00:09:20.891 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:20.891 ==============================================================================
00:09:20.891 Range in us Cumulative IO count
[per-bucket cumulative latency distribution: buckets from 4587.520 us, reaching 99.2958% of IOs at 20669.046 us]
00:09:20.893 23391.311 -
23492.135: 99.3178% ( 2) 00:09:20.893 23492.135 - 23592.960: 99.3618% ( 4) 00:09:20.893 23592.960 - 23693.785: 99.3948% ( 3) 00:09:20.893 23693.785 - 23794.609: 99.4388% ( 4) 00:09:20.893 23794.609 - 23895.434: 99.4718% ( 3) 00:09:20.893 23895.434 - 23996.258: 99.5048% ( 3) 00:09:20.893 23996.258 - 24097.083: 99.5489% ( 4) 00:09:20.893 24097.083 - 24197.908: 99.5819% ( 3) 00:09:20.893 24197.908 - 24298.732: 99.6259% ( 4) 00:09:20.893 24298.732 - 24399.557: 99.6699% ( 4) 00:09:20.893 24399.557 - 24500.382: 99.7139% ( 4) 00:09:20.893 24500.382 - 24601.206: 99.7469% ( 3) 00:09:20.893 24601.206 - 24702.031: 99.7909% ( 4) 00:09:20.893 24702.031 - 24802.855: 99.8349% ( 4) 00:09:20.893 24802.855 - 24903.680: 99.8790% ( 4) 00:09:20.893 24903.680 - 25004.505: 99.9230% ( 4) 00:09:20.893 25004.505 - 25105.329: 99.9670% ( 4) 00:09:20.893 25105.329 - 25206.154: 100.0000% ( 3) 00:09:20.893 00:09:20.893 11:04:49 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:22.310 Initializing NVMe Controllers 00:09:22.310 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:22.310 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:22.310 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:22.310 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:22.310 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:22.310 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:22.310 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:22.310 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:22.310 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:22.310 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:22.310 Initialization complete. Launching workers. 
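The spdk_nvme_perf invocation above (-q 128 -w write -o 12288 -t 1 -LL -i 0) issues 12288-byte (12 KiB) write commands at queue depth 128 for 1 second per namespace; the repeated -L is what produces the per-device latency summaries and histograms that follow. In the results table below, the MiB/s column is simply the IOPS column scaled by the IO size. A minimal sanity-check sketch using the PCIE (0000:00:13.0) NSID 1 row as input (the only assumption beyond the logged values is that awk is available on the host):

# MiB/s should equal IOPS * io_size / 2^20 for the 12288-byte writes.
awk 'BEGIN {
    iops = 10141.86      # IOPS reported for PCIE (0000:00:13.0) NSID 1
    io_size = 12288      # -o 12288: bytes per IO
    printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)   # prints ~118.85, matching the table
}'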
00:09:22.310 ======================================================== 00:09:22.310 Latency(us) 00:09:22.310 Device Information : IOPS MiB/s Average min max 00:09:22.310 PCIE (0000:00:13.0) NSID 1 from core 0: 10141.86 118.85 12631.60 8764.57 29070.90 00:09:22.310 PCIE (0000:00:10.0) NSID 1 from core 0: 10141.86 118.85 12624.28 8168.14 29032.66 00:09:22.310 PCIE (0000:00:11.0) NSID 1 from core 0: 10141.86 118.85 12613.98 7744.73 28553.74 00:09:22.310 PCIE (0000:00:12.0) NSID 1 from core 0: 10141.86 118.85 12603.59 6517.72 29461.00 00:09:22.310 PCIE (0000:00:12.0) NSID 2 from core 0: 10141.86 118.85 12592.99 5773.90 29491.25 00:09:22.310 PCIE (0000:00:12.0) NSID 3 from core 0: 10205.65 119.60 12503.88 4671.65 22457.72 00:09:22.310 ======================================================== 00:09:22.310 Total : 60914.96 713.85 12594.96 4671.65 29491.25 00:09:22.310 00:09:22.310 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:22.310 ================================================================================= 00:09:22.310 1.00000% : 9578.338us 00:09:22.310 10.00000% : 10636.997us 00:09:22.310 25.00000% : 11393.182us 00:09:22.310 50.00000% : 12401.428us 00:09:22.310 75.00000% : 13611.323us 00:09:22.310 90.00000% : 14518.745us 00:09:22.310 95.00000% : 15627.815us 00:09:22.310 98.00000% : 16131.938us 00:09:22.310 99.00000% : 21374.818us 00:09:22.310 99.50000% : 28029.243us 00:09:22.310 99.90000% : 29037.489us 00:09:22.310 99.99000% : 29239.138us 00:09:22.310 99.99900% : 29239.138us 00:09:22.310 99.99990% : 29239.138us 00:09:22.310 99.99999% : 29239.138us 00:09:22.310 00:09:22.310 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:22.310 ================================================================================= 00:09:22.310 1.00000% : 9527.926us 00:09:22.310 10.00000% : 10636.997us 00:09:22.310 25.00000% : 11443.594us 00:09:22.310 50.00000% : 12451.840us 00:09:22.310 75.00000% : 13510.498us 00:09:22.310 90.00000% : 14720.394us 00:09:22.310 95.00000% : 15526.991us 00:09:22.310 98.00000% : 16232.763us 00:09:22.310 99.00000% : 20971.520us 00:09:22.310 99.50000% : 28029.243us 00:09:22.310 99.90000% : 28835.840us 00:09:22.310 99.99000% : 29037.489us 00:09:22.310 99.99900% : 29037.489us 00:09:22.310 99.99990% : 29037.489us 00:09:22.310 99.99999% : 29037.489us 00:09:22.310 00:09:22.310 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:22.310 ================================================================================= 00:09:22.310 1.00000% : 9578.338us 00:09:22.310 10.00000% : 10788.234us 00:09:22.310 25.00000% : 11393.182us 00:09:22.310 50.00000% : 12451.840us 00:09:22.310 75.00000% : 13409.674us 00:09:22.310 90.00000% : 14619.569us 00:09:22.310 95.00000% : 15627.815us 00:09:22.310 98.00000% : 16434.412us 00:09:22.310 99.00000% : 20971.520us 00:09:22.310 99.50000% : 27625.945us 00:09:22.310 99.90000% : 28432.542us 00:09:22.310 99.99000% : 28634.191us 00:09:22.310 99.99900% : 28634.191us 00:09:22.310 99.99990% : 28634.191us 00:09:22.310 99.99999% : 28634.191us 00:09:22.310 00:09:22.310 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:22.310 ================================================================================= 00:09:22.310 1.00000% : 9628.751us 00:09:22.310 10.00000% : 10788.234us 00:09:22.310 25.00000% : 11393.182us 00:09:22.310 50.00000% : 12451.840us 00:09:22.310 75.00000% : 13510.498us 00:09:22.310 90.00000% : 14518.745us 00:09:22.310 95.00000% : 15325.342us 00:09:22.310 98.00000% : 
16535.237us 00:09:22.310 99.00000% : 21778.117us 00:09:22.310 99.50000% : 28634.191us 00:09:22.310 99.90000% : 29440.788us 00:09:22.310 99.99000% : 29440.788us 00:09:22.310 99.99900% : 29642.437us 00:09:22.310 99.99990% : 29642.437us 00:09:22.310 99.99999% : 29642.437us 00:09:22.310 00:09:22.310 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:22.310 ================================================================================= 00:09:22.310 1.00000% : 9527.926us 00:09:22.310 10.00000% : 10687.409us 00:09:22.310 25.00000% : 11443.594us 00:09:22.310 50.00000% : 12401.428us 00:09:22.310 75.00000% : 13510.498us 00:09:22.310 90.00000% : 14518.745us 00:09:22.310 95.00000% : 15224.517us 00:09:22.310 98.00000% : 16333.588us 00:09:22.310 99.00000% : 21677.292us 00:09:22.310 99.50000% : 28634.191us 00:09:22.310 99.90000% : 29440.788us 00:09:22.310 99.99000% : 29642.437us 00:09:22.310 99.99900% : 29642.437us 00:09:22.310 99.99990% : 29642.437us 00:09:22.310 99.99999% : 29642.437us 00:09:22.310 00:09:22.310 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:22.310 ================================================================================= 00:09:22.310 1.00000% : 9326.277us 00:09:22.310 10.00000% : 10687.409us 00:09:22.310 25.00000% : 11342.769us 00:09:22.310 50.00000% : 12502.252us 00:09:22.310 75.00000% : 13510.498us 00:09:22.310 90.00000% : 14518.745us 00:09:22.310 95.00000% : 15123.692us 00:09:22.310 98.00000% : 16031.114us 00:09:22.310 99.00000% : 16535.237us 00:09:22.310 99.50000% : 21677.292us 00:09:22.310 99.90000% : 22383.065us 00:09:22.310 99.99000% : 22483.889us 00:09:22.310 99.99900% : 22483.889us 00:09:22.310 99.99990% : 22483.889us 00:09:22.310 99.99999% : 22483.889us 00:09:22.310 00:09:22.310 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:22.310 ============================================================================== 00:09:22.310 Range in us Cumulative IO count 00:09:22.311 8721.329 - 8771.742: 0.0098% ( 1) 00:09:22.311 8771.742 - 8822.154: 0.0884% ( 8) 00:09:22.311 8822.154 - 8872.566: 0.1867% ( 10) 00:09:22.311 8872.566 - 8922.978: 0.3833% ( 20) 00:09:22.311 8922.978 - 8973.391: 0.4324% ( 5) 00:09:22.311 8973.391 - 9023.803: 0.4717% ( 4) 00:09:22.311 9023.803 - 9074.215: 0.5110% ( 4) 00:09:22.311 9074.215 - 9124.628: 0.5601% ( 5) 00:09:22.311 9124.628 - 9175.040: 0.5994% ( 4) 00:09:22.311 9175.040 - 9225.452: 0.6289% ( 3) 00:09:22.311 9376.689 - 9427.102: 0.6977% ( 7) 00:09:22.311 9427.102 - 9477.514: 0.8058% ( 11) 00:09:22.311 9477.514 - 9527.926: 0.9139% ( 11) 00:09:22.311 9527.926 - 9578.338: 1.0711% ( 16) 00:09:22.311 9578.338 - 9628.751: 1.2284% ( 16) 00:09:22.311 9628.751 - 9679.163: 1.4053% ( 18) 00:09:22.311 9679.163 - 9729.575: 1.6903% ( 29) 00:09:22.311 9729.575 - 9779.988: 2.0440% ( 36) 00:09:22.311 9779.988 - 9830.400: 2.3192% ( 28) 00:09:22.311 9830.400 - 9880.812: 2.5550% ( 24) 00:09:22.311 9880.812 - 9931.225: 2.7811% ( 23) 00:09:22.311 9931.225 - 9981.637: 2.9874% ( 21) 00:09:22.311 9981.637 - 10032.049: 3.2626% ( 28) 00:09:22.311 10032.049 - 10082.462: 3.6458% ( 39) 00:09:22.311 10082.462 - 10132.874: 3.9406% ( 30) 00:09:22.311 10132.874 - 10183.286: 4.4222% ( 49) 00:09:22.311 10183.286 - 10233.698: 4.9037% ( 49) 00:09:22.311 10233.698 - 10284.111: 5.3557% ( 46) 00:09:22.311 10284.111 - 10334.523: 5.9257% ( 58) 00:09:22.311 10334.523 - 10384.935: 6.7217% ( 81) 00:09:22.311 10384.935 - 10435.348: 7.2229% ( 51) 00:09:22.311 10435.348 - 10485.760: 7.8322% ( 62) 00:09:22.311 10485.760 - 
10536.172: 8.4611% ( 64) 00:09:22.311 10536.172 - 10586.585: 9.3848% ( 94) 00:09:22.311 10586.585 - 10636.997: 10.1808% ( 81) 00:09:22.311 10636.997 - 10687.409: 10.8589% ( 69) 00:09:22.311 10687.409 - 10737.822: 11.6450% ( 80) 00:09:22.311 10737.822 - 10788.234: 12.5295% ( 90) 00:09:22.311 10788.234 - 10838.646: 13.6105% ( 110) 00:09:22.311 10838.646 - 10889.058: 14.5047% ( 91) 00:09:22.311 10889.058 - 10939.471: 15.6348% ( 115) 00:09:22.311 10939.471 - 10989.883: 16.5782% ( 96) 00:09:22.311 10989.883 - 11040.295: 17.9049% ( 135) 00:09:22.311 11040.295 - 11090.708: 18.9858% ( 110) 00:09:22.311 11090.708 - 11141.120: 20.0275% ( 106) 00:09:22.311 11141.120 - 11191.532: 21.3247% ( 132) 00:09:22.311 11191.532 - 11241.945: 22.3172% ( 101) 00:09:22.311 11241.945 - 11292.357: 23.4277% ( 113) 00:09:22.311 11292.357 - 11342.769: 24.6659% ( 126) 00:09:22.311 11342.769 - 11393.182: 25.9041% ( 126) 00:09:22.311 11393.182 - 11443.594: 26.9654% ( 108) 00:09:22.311 11443.594 - 11494.006: 27.8400% ( 89) 00:09:22.311 11494.006 - 11544.418: 28.6851% ( 86) 00:09:22.311 11544.418 - 11594.831: 29.6678% ( 100) 00:09:22.311 11594.831 - 11645.243: 30.7390% ( 109) 00:09:22.311 11645.243 - 11695.655: 31.8888% ( 117) 00:09:22.311 11695.655 - 11746.068: 33.0877% ( 122) 00:09:22.311 11746.068 - 11796.480: 34.3750% ( 131) 00:09:22.311 11796.480 - 11846.892: 35.4167% ( 106) 00:09:22.311 11846.892 - 11897.305: 36.5959% ( 120) 00:09:22.311 11897.305 - 11947.717: 37.9815% ( 141) 00:09:22.311 11947.717 - 11998.129: 39.0330% ( 107) 00:09:22.311 11998.129 - 12048.542: 40.0452% ( 103) 00:09:22.311 12048.542 - 12098.954: 41.0869% ( 106) 00:09:22.311 12098.954 - 12149.366: 42.5118% ( 145) 00:09:22.311 12149.366 - 12199.778: 43.9072% ( 142) 00:09:22.311 12199.778 - 12250.191: 45.5090% ( 163) 00:09:22.311 12250.191 - 12300.603: 47.1207% ( 164) 00:09:22.311 12300.603 - 12351.015: 48.7323% ( 164) 00:09:22.311 12351.015 - 12401.428: 50.2555% ( 155) 00:09:22.311 12401.428 - 12451.840: 51.2873% ( 105) 00:09:22.311 12451.840 - 12502.252: 52.2013% ( 93) 00:09:22.311 12502.252 - 12552.665: 53.2134% ( 103) 00:09:22.311 12552.665 - 12603.077: 54.1667% ( 97) 00:09:22.311 12603.077 - 12653.489: 54.9921% ( 84) 00:09:22.311 12653.489 - 12703.902: 55.7488% ( 77) 00:09:22.311 12703.902 - 12754.314: 56.7610% ( 103) 00:09:22.311 12754.314 - 12804.726: 57.7830% ( 104) 00:09:22.311 12804.726 - 12855.138: 58.7657% ( 100) 00:09:22.311 12855.138 - 12905.551: 60.0236% ( 128) 00:09:22.311 12905.551 - 13006.375: 62.6474% ( 267) 00:09:22.311 13006.375 - 13107.200: 65.3204% ( 272) 00:09:22.311 13107.200 - 13208.025: 67.6395% ( 236) 00:09:22.311 13208.025 - 13308.849: 69.5755% ( 197) 00:09:22.311 13308.849 - 13409.674: 71.8750% ( 234) 00:09:22.311 13409.674 - 13510.498: 74.0271% ( 219) 00:09:22.311 13510.498 - 13611.323: 76.4937% ( 251) 00:09:22.311 13611.323 - 13712.148: 79.4517% ( 301) 00:09:22.311 13712.148 - 13812.972: 81.5252% ( 211) 00:09:22.311 13812.972 - 13913.797: 83.1564% ( 166) 00:09:22.311 13913.797 - 14014.622: 84.7877% ( 166) 00:09:22.311 14014.622 - 14115.446: 86.0947% ( 133) 00:09:22.311 14115.446 - 14216.271: 87.2642% ( 119) 00:09:22.311 14216.271 - 14317.095: 88.3550% ( 111) 00:09:22.311 14317.095 - 14417.920: 89.5735% ( 124) 00:09:22.311 14417.920 - 14518.745: 90.5660% ( 101) 00:09:22.311 14518.745 - 14619.569: 91.6765% ( 113) 00:09:22.311 14619.569 - 14720.394: 92.3349% ( 67) 00:09:22.311 14720.394 - 14821.218: 92.8459% ( 52) 00:09:22.311 14821.218 - 14922.043: 93.1309% ( 29) 00:09:22.311 14922.043 - 15022.868: 93.3766% ( 25) 
00:09:22.311 15022.868 - 15123.692: 93.6910% ( 32) 00:09:22.311 15123.692 - 15224.517: 93.9564% ( 27) 00:09:22.311 15224.517 - 15325.342: 94.4084% ( 46) 00:09:22.311 15325.342 - 15426.166: 94.6934% ( 29) 00:09:22.311 15426.166 - 15526.991: 94.9391% ( 25) 00:09:22.311 15526.991 - 15627.815: 95.2732% ( 34) 00:09:22.311 15627.815 - 15728.640: 95.9316% ( 67) 00:09:22.311 15728.640 - 15829.465: 96.5311% ( 61) 00:09:22.311 15829.465 - 15930.289: 97.1796% ( 66) 00:09:22.311 15930.289 - 16031.114: 97.7201% ( 55) 00:09:22.311 16031.114 - 16131.938: 98.0444% ( 33) 00:09:22.311 16131.938 - 16232.763: 98.2803% ( 24) 00:09:22.311 16232.763 - 16333.588: 98.4473% ( 17) 00:09:22.311 16333.588 - 16434.412: 98.5456% ( 10) 00:09:22.311 16434.412 - 16535.237: 98.6439% ( 10) 00:09:22.311 16535.237 - 16636.062: 98.7323% ( 9) 00:09:22.311 16636.062 - 16736.886: 98.7421% ( 1) 00:09:22.311 20971.520 - 21072.345: 98.7716% ( 3) 00:09:22.311 21072.345 - 21173.169: 98.8797% ( 11) 00:09:22.311 21173.169 - 21273.994: 98.9682% ( 9) 00:09:22.311 21273.994 - 21374.818: 99.0861% ( 12) 00:09:22.311 21374.818 - 21475.643: 99.1450% ( 6) 00:09:22.311 21475.643 - 21576.468: 99.2040% ( 6) 00:09:22.311 21576.468 - 21677.292: 99.2630% ( 6) 00:09:22.311 21677.292 - 21778.117: 99.3121% ( 5) 00:09:22.311 21778.117 - 21878.942: 99.3711% ( 6) 00:09:22.311 27222.646 - 27424.295: 99.4006% ( 3) 00:09:22.311 27424.295 - 27625.945: 99.4595% ( 6) 00:09:22.311 27625.945 - 27827.594: 99.4988% ( 4) 00:09:22.311 27827.594 - 28029.243: 99.5185% ( 2) 00:09:22.311 28029.243 - 28230.892: 99.5774% ( 6) 00:09:22.311 28230.892 - 28432.542: 99.6855% ( 11) 00:09:22.311 28432.542 - 28634.191: 99.7838% ( 10) 00:09:22.311 28634.191 - 28835.840: 99.8821% ( 10) 00:09:22.311 28835.840 - 29037.489: 99.9803% ( 10) 00:09:22.311 29037.489 - 29239.138: 100.0000% ( 2) 00:09:22.311 00:09:22.311 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:22.311 ============================================================================== 00:09:22.311 Range in us Cumulative IO count 00:09:22.311 8166.794 - 8217.206: 0.0098% ( 1) 00:09:22.311 8217.206 - 8267.618: 0.0491% ( 4) 00:09:22.311 8267.618 - 8318.031: 0.0884% ( 4) 00:09:22.311 8318.031 - 8368.443: 0.1081% ( 2) 00:09:22.311 8368.443 - 8418.855: 0.1278% ( 2) 00:09:22.311 8418.855 - 8469.268: 0.1474% ( 2) 00:09:22.311 8469.268 - 8519.680: 0.1867% ( 4) 00:09:22.311 8519.680 - 8570.092: 0.2162% ( 3) 00:09:22.311 8570.092 - 8620.505: 0.2457% ( 3) 00:09:22.311 8620.505 - 8670.917: 0.2948% ( 5) 00:09:22.311 8670.917 - 8721.329: 0.5503% ( 26) 00:09:22.311 8721.329 - 8771.742: 0.5700% ( 2) 00:09:22.311 8771.742 - 8822.154: 0.5798% ( 1) 00:09:22.311 8973.391 - 9023.803: 0.6093% ( 3) 00:09:22.311 9023.803 - 9074.215: 0.6289% ( 2) 00:09:22.311 9275.865 - 9326.277: 0.6486% ( 2) 00:09:22.311 9326.277 - 9376.689: 0.6977% ( 5) 00:09:22.311 9376.689 - 9427.102: 0.7567% ( 6) 00:09:22.311 9427.102 - 9477.514: 0.8746% ( 12) 00:09:22.311 9477.514 - 9527.926: 1.0515% ( 18) 00:09:22.311 9527.926 - 9578.338: 1.3267% ( 28) 00:09:22.311 9578.338 - 9628.751: 1.6411% ( 32) 00:09:22.311 9628.751 - 9679.163: 1.9949% ( 36) 00:09:22.311 9679.163 - 9729.575: 2.2897% ( 30) 00:09:22.311 9729.575 - 9779.988: 2.5747% ( 29) 00:09:22.311 9779.988 - 9830.400: 2.8892% ( 32) 00:09:22.311 9830.400 - 9880.812: 3.2528% ( 37) 00:09:22.311 9880.812 - 9931.225: 3.6164% ( 37) 00:09:22.311 9931.225 - 9981.637: 3.8915% ( 28) 00:09:22.311 9981.637 - 10032.049: 4.1274% ( 24) 00:09:22.311 10032.049 - 10082.462: 4.4615% ( 34) 00:09:22.311 10082.462 - 
10132.874: 4.7170% ( 26) 00:09:22.311 10132.874 - 10183.286: 5.1297% ( 42) 00:09:22.311 10183.286 - 10233.698: 5.5130% ( 39) 00:09:22.311 10233.698 - 10284.111: 5.8569% ( 35) 00:09:22.311 10284.111 - 10334.523: 6.2795% ( 43) 00:09:22.311 10334.523 - 10384.935: 6.8789% ( 61) 00:09:22.311 10384.935 - 10435.348: 7.3998% ( 53) 00:09:22.311 10435.348 - 10485.760: 7.9501% ( 56) 00:09:22.311 10485.760 - 10536.172: 8.5790% ( 64) 00:09:22.311 10536.172 - 10586.585: 9.4929% ( 93) 00:09:22.311 10586.585 - 10636.997: 10.3282% ( 85) 00:09:22.311 10636.997 - 10687.409: 11.0161% ( 70) 00:09:22.311 10687.409 - 10737.822: 11.6942% ( 69) 00:09:22.311 10737.822 - 10788.234: 12.3526% ( 67) 00:09:22.311 10788.234 - 10838.646: 13.0798% ( 74) 00:09:22.311 10838.646 - 10889.058: 13.7677% ( 70) 00:09:22.311 10889.058 - 10939.471: 14.6718% ( 92) 00:09:22.312 10939.471 - 10989.883: 15.7528% ( 110) 00:09:22.312 10989.883 - 11040.295: 16.8829% ( 115) 00:09:22.312 11040.295 - 11090.708: 17.9835% ( 112) 00:09:22.312 11090.708 - 11141.120: 18.9662% ( 100) 00:09:22.312 11141.120 - 11191.532: 19.9587% ( 101) 00:09:22.312 11191.532 - 11241.945: 20.9414% ( 100) 00:09:22.312 11241.945 - 11292.357: 22.0519% ( 113) 00:09:22.312 11292.357 - 11342.769: 23.0542% ( 102) 00:09:22.312 11342.769 - 11393.182: 24.2630% ( 123) 00:09:22.312 11393.182 - 11443.594: 25.5896% ( 135) 00:09:22.312 11443.594 - 11494.006: 26.8868% ( 132) 00:09:22.312 11494.006 - 11544.418: 28.2822% ( 142) 00:09:22.312 11544.418 - 11594.831: 29.6089% ( 135) 00:09:22.312 11594.831 - 11645.243: 30.9061% ( 132) 00:09:22.312 11645.243 - 11695.655: 31.9379% ( 105) 00:09:22.312 11695.655 - 11746.068: 33.4316% ( 152) 00:09:22.312 11746.068 - 11796.480: 34.6796% ( 127) 00:09:22.312 11796.480 - 11846.892: 36.1340% ( 148) 00:09:22.312 11846.892 - 11897.305: 37.6179% ( 151) 00:09:22.312 11897.305 - 11947.717: 38.6498% ( 105) 00:09:22.312 11947.717 - 11998.129: 39.8781% ( 125) 00:09:22.312 11998.129 - 12048.542: 40.9395% ( 108) 00:09:22.312 12048.542 - 12098.954: 42.1384% ( 122) 00:09:22.312 12098.954 - 12149.366: 43.1997% ( 108) 00:09:22.312 12149.366 - 12199.778: 44.3986% ( 122) 00:09:22.312 12199.778 - 12250.191: 45.7449% ( 137) 00:09:22.312 12250.191 - 12300.603: 46.8947% ( 117) 00:09:22.312 12300.603 - 12351.015: 48.0936% ( 122) 00:09:22.312 12351.015 - 12401.428: 49.4693% ( 140) 00:09:22.312 12401.428 - 12451.840: 50.9336% ( 149) 00:09:22.312 12451.840 - 12502.252: 52.5354% ( 163) 00:09:22.312 12502.252 - 12552.665: 54.0782% ( 157) 00:09:22.312 12552.665 - 12603.077: 55.2575% ( 120) 00:09:22.312 12603.077 - 12653.489: 56.5350% ( 130) 00:09:22.312 12653.489 - 12703.902: 57.7437% ( 123) 00:09:22.312 12703.902 - 12754.314: 58.7166% ( 99) 00:09:22.312 12754.314 - 12804.726: 59.9744% ( 128) 00:09:22.312 12804.726 - 12855.138: 60.9277% ( 97) 00:09:22.312 12855.138 - 12905.551: 62.0676% ( 116) 00:09:22.312 12905.551 - 13006.375: 64.6128% ( 259) 00:09:22.312 13006.375 - 13107.200: 67.1678% ( 260) 00:09:22.312 13107.200 - 13208.025: 69.4870% ( 236) 00:09:22.312 13208.025 - 13308.849: 71.6195% ( 217) 00:09:22.312 13308.849 - 13409.674: 73.8797% ( 230) 00:09:22.312 13409.674 - 13510.498: 76.1989% ( 236) 00:09:22.312 13510.498 - 13611.323: 78.2724% ( 211) 00:09:22.312 13611.323 - 13712.148: 79.9332% ( 169) 00:09:22.312 13712.148 - 13812.972: 81.4564% ( 155) 00:09:22.312 13812.972 - 13913.797: 82.7928% ( 136) 00:09:22.312 13913.797 - 14014.622: 83.9230% ( 115) 00:09:22.312 14014.622 - 14115.446: 85.1022% ( 120) 00:09:22.312 14115.446 - 14216.271: 85.8491% ( 76) 00:09:22.312 
14216.271 - 14317.095: 86.9006% ( 107) 00:09:22.312 14317.095 - 14417.920: 88.0307% ( 115) 00:09:22.312 14417.920 - 14518.745: 89.0428% ( 103) 00:09:22.312 14518.745 - 14619.569: 89.9568% ( 93) 00:09:22.312 14619.569 - 14720.394: 90.5955% ( 65) 00:09:22.312 14720.394 - 14821.218: 91.1753% ( 59) 00:09:22.312 14821.218 - 14922.043: 91.6961% ( 53) 00:09:22.312 14922.043 - 15022.868: 92.2170% ( 53) 00:09:22.312 15022.868 - 15123.692: 92.7182% ( 51) 00:09:22.312 15123.692 - 15224.517: 93.3373% ( 63) 00:09:22.312 15224.517 - 15325.342: 94.0743% ( 75) 00:09:22.312 15325.342 - 15426.166: 94.7818% ( 72) 00:09:22.312 15426.166 - 15526.991: 95.4009% ( 63) 00:09:22.312 15526.991 - 15627.815: 95.9611% ( 57) 00:09:22.312 15627.815 - 15728.640: 96.3935% ( 44) 00:09:22.312 15728.640 - 15829.465: 96.9045% ( 52) 00:09:22.312 15829.465 - 15930.289: 97.3172% ( 42) 00:09:22.312 15930.289 - 16031.114: 97.5334% ( 22) 00:09:22.312 16031.114 - 16131.938: 97.7693% ( 24) 00:09:22.312 16131.938 - 16232.763: 98.0542% ( 29) 00:09:22.312 16232.763 - 16333.588: 98.2213% ( 17) 00:09:22.312 16333.588 - 16434.412: 98.3491% ( 13) 00:09:22.312 16434.412 - 16535.237: 98.3884% ( 4) 00:09:22.312 16535.237 - 16636.062: 98.4670% ( 8) 00:09:22.312 16636.062 - 16736.886: 98.5259% ( 6) 00:09:22.312 16837.711 - 16938.535: 98.5947% ( 7) 00:09:22.312 17039.360 - 17140.185: 98.6439% ( 5) 00:09:22.312 17140.185 - 17241.009: 98.6537% ( 1) 00:09:22.312 17241.009 - 17341.834: 98.6930% ( 4) 00:09:22.312 17341.834 - 17442.658: 98.7323% ( 4) 00:09:22.312 17442.658 - 17543.483: 98.7421% ( 1) 00:09:22.312 20164.923 - 20265.748: 98.7520% ( 1) 00:09:22.312 20265.748 - 20366.572: 98.7618% ( 1) 00:09:22.312 20366.572 - 20467.397: 98.7814% ( 2) 00:09:22.312 20467.397 - 20568.222: 98.8699% ( 9) 00:09:22.312 20568.222 - 20669.046: 98.9092% ( 4) 00:09:22.312 20669.046 - 20769.871: 98.9190% ( 1) 00:09:22.312 20769.871 - 20870.695: 98.9682% ( 5) 00:09:22.312 20870.695 - 20971.520: 99.0075% ( 4) 00:09:22.312 20971.520 - 21072.345: 99.0369% ( 3) 00:09:22.312 21072.345 - 21173.169: 99.0664% ( 3) 00:09:22.312 21173.169 - 21273.994: 99.0959% ( 3) 00:09:22.312 21273.994 - 21374.818: 99.1254% ( 3) 00:09:22.312 21374.818 - 21475.643: 99.1549% ( 3) 00:09:22.312 21475.643 - 21576.468: 99.2040% ( 5) 00:09:22.312 21576.468 - 21677.292: 99.2433% ( 4) 00:09:22.312 21677.292 - 21778.117: 99.2826% ( 4) 00:09:22.312 21778.117 - 21878.942: 99.3219% ( 4) 00:09:22.312 21878.942 - 21979.766: 99.3612% ( 4) 00:09:22.312 21979.766 - 22080.591: 99.3711% ( 1) 00:09:22.312 27424.295 - 27625.945: 99.3809% ( 1) 00:09:22.312 27625.945 - 27827.594: 99.4595% ( 8) 00:09:22.312 27827.594 - 28029.243: 99.5480% ( 9) 00:09:22.312 28029.243 - 28230.892: 99.6069% ( 6) 00:09:22.312 28230.892 - 28432.542: 99.7150% ( 11) 00:09:22.312 28432.542 - 28634.191: 99.8231% ( 11) 00:09:22.312 28634.191 - 28835.840: 99.9116% ( 9) 00:09:22.312 28835.840 - 29037.489: 100.0000% ( 9) 00:09:22.312 00:09:22.312 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:22.312 ============================================================================== 00:09:22.312 Range in us Cumulative IO count 00:09:22.312 7713.083 - 7763.495: 0.0295% ( 3) 00:09:22.312 7763.495 - 7813.908: 0.0983% ( 7) 00:09:22.312 7813.908 - 7864.320: 0.1474% ( 5) 00:09:22.312 7864.320 - 7914.732: 0.3046% ( 16) 00:09:22.312 7914.732 - 7965.145: 0.3636% ( 6) 00:09:22.312 7965.145 - 8015.557: 0.3931% ( 3) 00:09:22.312 8015.557 - 8065.969: 0.4324% ( 4) 00:09:22.312 8065.969 - 8116.382: 0.4619% ( 3) 00:09:22.312 8116.382 - 8166.794: 
0.5012% ( 4) 00:09:22.312 8166.794 - 8217.206: 0.5405% ( 4) 00:09:22.312 8217.206 - 8267.618: 0.5700% ( 3) 00:09:22.312 8267.618 - 8318.031: 0.5994% ( 3) 00:09:22.312 8318.031 - 8368.443: 0.6289% ( 3) 00:09:22.312 9275.865 - 9326.277: 0.6388% ( 1) 00:09:22.312 9326.277 - 9376.689: 0.6977% ( 6) 00:09:22.312 9376.689 - 9427.102: 0.7370% ( 4) 00:09:22.312 9427.102 - 9477.514: 0.8058% ( 7) 00:09:22.312 9477.514 - 9527.926: 0.8844% ( 8) 00:09:22.312 9527.926 - 9578.338: 1.0318% ( 15) 00:09:22.312 9578.338 - 9628.751: 1.1596% ( 13) 00:09:22.312 9628.751 - 9679.163: 1.3365% ( 18) 00:09:22.312 9679.163 - 9729.575: 1.5625% ( 23) 00:09:22.312 9729.575 - 9779.988: 1.9556% ( 40) 00:09:22.312 9779.988 - 9830.400: 2.3683% ( 42) 00:09:22.312 9830.400 - 9880.812: 2.6631% ( 30) 00:09:22.312 9880.812 - 9931.225: 2.9874% ( 33) 00:09:22.312 9931.225 - 9981.637: 3.5279% ( 55) 00:09:22.312 9981.637 - 10032.049: 3.9996% ( 48) 00:09:22.312 10032.049 - 10082.462: 4.2453% ( 25) 00:09:22.312 10082.462 - 10132.874: 4.6285% ( 39) 00:09:22.312 10132.874 - 10183.286: 4.9332% ( 31) 00:09:22.312 10183.286 - 10233.698: 5.1690% ( 24) 00:09:22.312 10233.698 - 10284.111: 5.3950% ( 23) 00:09:22.312 10284.111 - 10334.523: 5.6702% ( 28) 00:09:22.312 10334.523 - 10384.935: 6.0240% ( 36) 00:09:22.312 10384.935 - 10435.348: 6.3974% ( 38) 00:09:22.312 10435.348 - 10485.760: 6.7708% ( 38) 00:09:22.312 10485.760 - 10536.172: 7.1934% ( 43) 00:09:22.312 10536.172 - 10586.585: 7.8715% ( 69) 00:09:22.312 10586.585 - 10636.997: 8.5102% ( 65) 00:09:22.312 10636.997 - 10687.409: 9.2276% ( 73) 00:09:22.312 10687.409 - 10737.822: 9.9450% ( 73) 00:09:22.312 10737.822 - 10788.234: 10.6820% ( 75) 00:09:22.312 10788.234 - 10838.646: 11.4682% ( 80) 00:09:22.312 10838.646 - 10889.058: 12.3329% ( 88) 00:09:22.312 10889.058 - 10939.471: 13.2075% ( 89) 00:09:22.312 10939.471 - 10989.883: 14.2983% ( 111) 00:09:22.312 10989.883 - 11040.295: 15.7822% ( 151) 00:09:22.312 11040.295 - 11090.708: 17.2759% ( 152) 00:09:22.312 11090.708 - 11141.120: 18.5142% ( 126) 00:09:22.312 11141.120 - 11191.532: 19.7425% ( 125) 00:09:22.312 11191.532 - 11241.945: 21.3345% ( 162) 00:09:22.312 11241.945 - 11292.357: 22.7693% ( 146) 00:09:22.312 11292.357 - 11342.769: 24.2433% ( 150) 00:09:22.312 11342.769 - 11393.182: 25.3931% ( 117) 00:09:22.312 11393.182 - 11443.594: 26.5723% ( 120) 00:09:22.312 11443.594 - 11494.006: 27.4764% ( 92) 00:09:22.312 11494.006 - 11544.418: 28.3805% ( 92) 00:09:22.312 11544.418 - 11594.831: 29.3042% ( 94) 00:09:22.312 11594.831 - 11645.243: 30.4147% ( 113) 00:09:22.312 11645.243 - 11695.655: 31.5939% ( 120) 00:09:22.312 11695.655 - 11746.068: 33.0483% ( 148) 00:09:22.312 11746.068 - 11796.480: 34.5224% ( 150) 00:09:22.312 11796.480 - 11846.892: 35.8491% ( 135) 00:09:22.312 11846.892 - 11897.305: 37.1659% ( 134) 00:09:22.312 11897.305 - 11947.717: 38.1879% ( 104) 00:09:22.312 11947.717 - 11998.129: 39.5145% ( 135) 00:09:22.312 11998.129 - 12048.542: 40.8805% ( 139) 00:09:22.312 12048.542 - 12098.954: 41.9910% ( 113) 00:09:22.312 12098.954 - 12149.366: 43.2390% ( 127) 00:09:22.312 12149.366 - 12199.778: 44.3691% ( 115) 00:09:22.312 12199.778 - 12250.191: 45.4009% ( 105) 00:09:22.312 12250.191 - 12300.603: 46.5900% ( 121) 00:09:22.312 12300.603 - 12351.015: 47.9068% ( 134) 00:09:22.313 12351.015 - 12401.428: 49.1057% ( 122) 00:09:22.313 12401.428 - 12451.840: 50.4226% ( 134) 00:09:22.313 12451.840 - 12502.252: 51.7492% ( 135) 00:09:22.313 12502.252 - 12552.665: 52.9383% ( 121) 00:09:22.313 12552.665 - 12603.077: 54.1274% ( 121) 00:09:22.313 
12603.077 - 12653.489: 55.5523% ( 145) 00:09:22.313 12653.489 - 12703.902: 57.1246% ( 160) 00:09:22.313 12703.902 - 12754.314: 58.7264% ( 163) 00:09:22.313 12754.314 - 12804.726: 60.1317% ( 143) 00:09:22.313 12804.726 - 12855.138: 61.5861% ( 148) 00:09:22.313 12855.138 - 12905.551: 62.8734% ( 131) 00:09:22.313 12905.551 - 13006.375: 66.0967% ( 328) 00:09:22.313 13006.375 - 13107.200: 68.8384% ( 279) 00:09:22.313 13107.200 - 13208.025: 71.1969% ( 240) 00:09:22.313 13208.025 - 13308.849: 73.4080% ( 225) 00:09:22.313 13308.849 - 13409.674: 75.5700% ( 220) 00:09:22.313 13409.674 - 13510.498: 77.4764% ( 194) 00:09:22.313 13510.498 - 13611.323: 79.1372% ( 169) 00:09:22.313 13611.323 - 13712.148: 80.8962% ( 179) 00:09:22.313 13712.148 - 13812.972: 82.4587% ( 159) 00:09:22.313 13812.972 - 13913.797: 83.9819% ( 155) 00:09:22.313 13913.797 - 14014.622: 85.4461% ( 149) 00:09:22.313 14014.622 - 14115.446: 86.5861% ( 116) 00:09:22.313 14115.446 - 14216.271: 87.4803% ( 91) 00:09:22.313 14216.271 - 14317.095: 88.4336% ( 97) 00:09:22.313 14317.095 - 14417.920: 89.1903% ( 77) 00:09:22.313 14417.920 - 14518.745: 89.8388% ( 66) 00:09:22.313 14518.745 - 14619.569: 90.4776% ( 65) 00:09:22.313 14619.569 - 14720.394: 91.0967% ( 63) 00:09:22.313 14720.394 - 14821.218: 91.4406% ( 35) 00:09:22.313 14821.218 - 14922.043: 91.7944% ( 36) 00:09:22.313 14922.043 - 15022.868: 92.1482% ( 36) 00:09:22.313 15022.868 - 15123.692: 92.6101% ( 47) 00:09:22.313 15123.692 - 15224.517: 93.0425% ( 44) 00:09:22.313 15224.517 - 15325.342: 93.6124% ( 58) 00:09:22.313 15325.342 - 15426.166: 94.2315% ( 63) 00:09:22.313 15426.166 - 15526.991: 94.9980% ( 78) 00:09:22.313 15526.991 - 15627.815: 95.7056% ( 72) 00:09:22.313 15627.815 - 15728.640: 96.3542% ( 66) 00:09:22.313 15728.640 - 15829.465: 96.9340% ( 59) 00:09:22.313 15829.465 - 15930.289: 97.3565% ( 43) 00:09:22.313 15930.289 - 16031.114: 97.5924% ( 24) 00:09:22.313 16031.114 - 16131.938: 97.7693% ( 18) 00:09:22.313 16131.938 - 16232.763: 97.8774% ( 11) 00:09:22.313 16232.763 - 16333.588: 97.9658% ( 9) 00:09:22.313 16333.588 - 16434.412: 98.0739% ( 11) 00:09:22.313 16434.412 - 16535.237: 98.2017% ( 13) 00:09:22.313 16535.237 - 16636.062: 98.2704% ( 7) 00:09:22.313 16636.062 - 16736.886: 98.3097% ( 4) 00:09:22.313 16736.886 - 16837.711: 98.3589% ( 5) 00:09:22.313 16837.711 - 16938.535: 98.4080% ( 5) 00:09:22.313 16938.535 - 17039.360: 98.4572% ( 5) 00:09:22.313 17039.360 - 17140.185: 98.5063% ( 5) 00:09:22.313 17140.185 - 17241.009: 98.5554% ( 5) 00:09:22.313 17241.009 - 17341.834: 98.6046% ( 5) 00:09:22.313 17341.834 - 17442.658: 98.6439% ( 4) 00:09:22.313 17442.658 - 17543.483: 98.6930% ( 5) 00:09:22.313 17543.483 - 17644.308: 98.7323% ( 4) 00:09:22.313 17644.308 - 17745.132: 98.7421% ( 1) 00:09:22.313 20265.748 - 20366.572: 98.7520% ( 1) 00:09:22.313 20366.572 - 20467.397: 98.8011% ( 5) 00:09:22.313 20467.397 - 20568.222: 98.8404% ( 4) 00:09:22.313 20568.222 - 20669.046: 98.8895% ( 5) 00:09:22.313 20669.046 - 20769.871: 98.9289% ( 4) 00:09:22.313 20769.871 - 20870.695: 98.9780% ( 5) 00:09:22.313 20870.695 - 20971.520: 99.0173% ( 4) 00:09:22.313 20971.520 - 21072.345: 99.0566% ( 4) 00:09:22.313 21072.345 - 21173.169: 99.0959% ( 4) 00:09:22.313 21173.169 - 21273.994: 99.1254% ( 3) 00:09:22.313 21273.994 - 21374.818: 99.1745% ( 5) 00:09:22.313 21374.818 - 21475.643: 99.2138% ( 4) 00:09:22.313 21475.643 - 21576.468: 99.2531% ( 4) 00:09:22.313 21576.468 - 21677.292: 99.2925% ( 4) 00:09:22.313 21677.292 - 21778.117: 99.3416% ( 5) 00:09:22.313 21778.117 - 21878.942: 99.3711% ( 3) 
00:09:22.313 27222.646 - 27424.295: 99.4399% ( 7) 00:09:22.313 27424.295 - 27625.945: 99.5381% ( 10) 00:09:22.313 27625.945 - 27827.594: 99.6462% ( 11) 00:09:22.313 27827.594 - 28029.243: 99.7445% ( 10) 00:09:22.313 28029.243 - 28230.892: 99.8428% ( 10) 00:09:22.313 28230.892 - 28432.542: 99.9410% ( 10) 00:09:22.313 28432.542 - 28634.191: 100.0000% ( 6) 00:09:22.313 00:09:22.313 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:22.313 ============================================================================== 00:09:22.313 Range in us Cumulative IO count 00:09:22.313 6503.188 - 6553.600: 0.0098% ( 1) 00:09:22.313 6654.425 - 6704.837: 0.0688% ( 6) 00:09:22.313 6704.837 - 6755.249: 0.2260% ( 16) 00:09:22.313 6755.249 - 6805.662: 0.3243% ( 10) 00:09:22.313 6805.662 - 6856.074: 0.4127% ( 9) 00:09:22.313 6856.074 - 6906.486: 0.4422% ( 3) 00:09:22.313 6906.486 - 6956.898: 0.4815% ( 4) 00:09:22.313 6956.898 - 7007.311: 0.5110% ( 3) 00:09:22.313 7007.311 - 7057.723: 0.5307% ( 2) 00:09:22.313 7057.723 - 7108.135: 0.5601% ( 3) 00:09:22.313 7108.135 - 7158.548: 0.5896% ( 3) 00:09:22.313 7158.548 - 7208.960: 0.6191% ( 3) 00:09:22.313 7208.960 - 7259.372: 0.6289% ( 1) 00:09:22.313 9275.865 - 9326.277: 0.6388% ( 1) 00:09:22.313 9376.689 - 9427.102: 0.6486% ( 1) 00:09:22.313 9427.102 - 9477.514: 0.6781% ( 3) 00:09:22.313 9477.514 - 9527.926: 0.7469% ( 7) 00:09:22.313 9527.926 - 9578.338: 0.9139% ( 17) 00:09:22.313 9578.338 - 9628.751: 1.1105% ( 20) 00:09:22.313 9628.751 - 9679.163: 1.4151% ( 31) 00:09:22.313 9679.163 - 9729.575: 1.8082% ( 40) 00:09:22.313 9729.575 - 9779.988: 2.2013% ( 40) 00:09:22.313 9779.988 - 9830.400: 2.5059% ( 31) 00:09:22.313 9830.400 - 9880.812: 2.8597% ( 36) 00:09:22.313 9880.812 - 9931.225: 3.3805% ( 53) 00:09:22.313 9931.225 - 9981.637: 4.1175% ( 75) 00:09:22.313 9981.637 - 10032.049: 4.6187% ( 51) 00:09:22.313 10032.049 - 10082.462: 4.9823% ( 37) 00:09:22.313 10082.462 - 10132.874: 5.2575% ( 28) 00:09:22.313 10132.874 - 10183.286: 5.5130% ( 26) 00:09:22.313 10183.286 - 10233.698: 5.7980% ( 29) 00:09:22.313 10233.698 - 10284.111: 6.0731% ( 28) 00:09:22.313 10284.111 - 10334.523: 6.2500% ( 18) 00:09:22.313 10334.523 - 10384.935: 6.5153% ( 27) 00:09:22.313 10384.935 - 10435.348: 6.7610% ( 25) 00:09:22.313 10435.348 - 10485.760: 7.1148% ( 36) 00:09:22.313 10485.760 - 10536.172: 7.4686% ( 36) 00:09:22.313 10536.172 - 10586.585: 7.8420% ( 38) 00:09:22.313 10586.585 - 10636.997: 8.3726% ( 54) 00:09:22.313 10636.997 - 10687.409: 8.9426% ( 58) 00:09:22.313 10687.409 - 10737.822: 9.5912% ( 66) 00:09:22.313 10737.822 - 10788.234: 10.4560% ( 88) 00:09:22.313 10788.234 - 10838.646: 11.4485% ( 101) 00:09:22.313 10838.646 - 10889.058: 12.4803% ( 105) 00:09:22.313 10889.058 - 10939.471: 13.3451% ( 88) 00:09:22.313 10939.471 - 10989.883: 14.5932% ( 127) 00:09:22.313 10989.883 - 11040.295: 15.6447% ( 107) 00:09:22.313 11040.295 - 11090.708: 16.9418% ( 132) 00:09:22.313 11090.708 - 11141.120: 18.2881% ( 137) 00:09:22.313 11141.120 - 11191.532: 19.6934% ( 143) 00:09:22.313 11191.532 - 11241.945: 21.0397% ( 137) 00:09:22.313 11241.945 - 11292.357: 22.6710% ( 166) 00:09:22.313 11292.357 - 11342.769: 24.3612% ( 172) 00:09:22.313 11342.769 - 11393.182: 25.8156% ( 148) 00:09:22.313 11393.182 - 11443.594: 27.4568% ( 167) 00:09:22.313 11443.594 - 11494.006: 28.7932% ( 136) 00:09:22.313 11494.006 - 11544.418: 30.1297% ( 136) 00:09:22.313 11544.418 - 11594.831: 31.7905% ( 169) 00:09:22.313 11594.831 - 11645.243: 33.1466% ( 138) 00:09:22.313 11645.243 - 11695.655: 34.3160% ( 119) 
00:09:22.313 11695.655 - 11746.068: 35.3381% ( 104) 00:09:22.313 11746.068 - 11796.480: 36.4682% ( 115) 00:09:22.313 11796.480 - 11846.892: 37.4705% ( 102) 00:09:22.313 11846.892 - 11897.305: 38.5318% ( 108) 00:09:22.313 11897.305 - 11947.717: 39.4949% ( 98) 00:09:22.313 11947.717 - 11998.129: 40.5267% ( 105) 00:09:22.313 11998.129 - 12048.542: 41.7158% ( 121) 00:09:22.313 12048.542 - 12098.954: 42.8656% ( 117) 00:09:22.313 12098.954 - 12149.366: 43.9760% ( 113) 00:09:22.313 12149.366 - 12199.778: 45.2044% ( 125) 00:09:22.313 12199.778 - 12250.191: 46.2657% ( 108) 00:09:22.313 12250.191 - 12300.603: 47.1993% ( 95) 00:09:22.313 12300.603 - 12351.015: 48.1525% ( 97) 00:09:22.313 12351.015 - 12401.428: 49.4693% ( 134) 00:09:22.313 12401.428 - 12451.840: 50.8058% ( 136) 00:09:22.313 12451.840 - 12502.252: 51.8278% ( 104) 00:09:22.313 12502.252 - 12552.665: 53.0759% ( 127) 00:09:22.313 12552.665 - 12603.077: 54.6482% ( 160) 00:09:22.313 12603.077 - 12653.489: 56.0240% ( 140) 00:09:22.313 12653.489 - 12703.902: 57.2818% ( 128) 00:09:22.313 12703.902 - 12754.314: 58.5594% ( 130) 00:09:22.313 12754.314 - 12804.726: 59.5912% ( 105) 00:09:22.313 12804.726 - 12855.138: 60.6918% ( 112) 00:09:22.313 12855.138 - 12905.551: 61.9792% ( 131) 00:09:22.313 12905.551 - 13006.375: 64.7406% ( 281) 00:09:22.313 13006.375 - 13107.200: 67.2268% ( 253) 00:09:22.313 13107.200 - 13208.025: 69.4870% ( 230) 00:09:22.313 13208.025 - 13308.849: 71.9929% ( 255) 00:09:22.313 13308.849 - 13409.674: 74.3416% ( 239) 00:09:22.313 13409.674 - 13510.498: 76.5527% ( 225) 00:09:22.313 13510.498 - 13611.323: 78.3608% ( 184) 00:09:22.313 13611.323 - 13712.148: 80.2476% ( 192) 00:09:22.313 13712.148 - 13812.972: 82.1443% ( 193) 00:09:22.313 13812.972 - 13913.797: 83.6380% ( 152) 00:09:22.313 13913.797 - 14014.622: 84.9548% ( 134) 00:09:22.313 14014.622 - 14115.446: 86.1340% ( 120) 00:09:22.313 14115.446 - 14216.271: 87.1364% ( 102) 00:09:22.313 14216.271 - 14317.095: 88.4729% ( 136) 00:09:22.313 14317.095 - 14417.920: 89.2590% ( 80) 00:09:22.313 14417.920 - 14518.745: 90.0157% ( 77) 00:09:22.313 14518.745 - 14619.569: 90.6152% ( 61) 00:09:22.313 14619.569 - 14720.394: 91.2343% ( 63) 00:09:22.313 14720.394 - 14821.218: 91.8337% ( 61) 00:09:22.314 14821.218 - 14922.043: 92.5904% ( 77) 00:09:22.314 14922.043 - 15022.868: 93.1899% ( 61) 00:09:22.314 15022.868 - 15123.692: 93.8581% ( 68) 00:09:22.314 15123.692 - 15224.517: 94.7425% ( 90) 00:09:22.314 15224.517 - 15325.342: 95.2732% ( 54) 00:09:22.314 15325.342 - 15426.166: 95.6761% ( 41) 00:09:22.314 15426.166 - 15526.991: 95.9218% ( 25) 00:09:22.314 15526.991 - 15627.815: 96.0987% ( 18) 00:09:22.314 15627.815 - 15728.640: 96.3836% ( 29) 00:09:22.314 15728.640 - 15829.465: 96.5802% ( 20) 00:09:22.314 15829.465 - 15930.289: 96.7866% ( 21) 00:09:22.314 15930.289 - 16031.114: 97.1502% ( 37) 00:09:22.314 16031.114 - 16131.938: 97.3270% ( 18) 00:09:22.314 16131.938 - 16232.763: 97.5531% ( 23) 00:09:22.314 16232.763 - 16333.588: 97.7594% ( 21) 00:09:22.314 16333.588 - 16434.412: 97.9167% ( 16) 00:09:22.314 16434.412 - 16535.237: 98.0739% ( 16) 00:09:22.314 16535.237 - 16636.062: 98.1918% ( 12) 00:09:22.314 16636.062 - 16736.886: 98.5653% ( 38) 00:09:22.314 16736.886 - 16837.711: 98.7127% ( 15) 00:09:22.314 16837.711 - 16938.535: 98.7421% ( 3) 00:09:22.314 21072.345 - 21173.169: 98.7716% ( 3) 00:09:22.314 21173.169 - 21273.994: 98.8208% ( 5) 00:09:22.314 21273.994 - 21374.818: 98.8601% ( 4) 00:09:22.314 21374.818 - 21475.643: 98.8994% ( 4) 00:09:22.314 21475.643 - 21576.468: 98.9387% ( 4) 
00:09:22.314 21576.468 - 21677.292: 98.9878% ( 5) 00:09:22.314 21677.292 - 21778.117: 99.0173% ( 3) 00:09:22.314 21778.117 - 21878.942: 99.0664% ( 5) 00:09:22.314 21878.942 - 21979.766: 99.1156% ( 5) 00:09:22.314 21979.766 - 22080.591: 99.1647% ( 5) 00:09:22.314 22080.591 - 22181.415: 99.2040% ( 4) 00:09:22.314 22181.415 - 22282.240: 99.2531% ( 5) 00:09:22.314 22282.240 - 22383.065: 99.3023% ( 5) 00:09:22.314 22383.065 - 22483.889: 99.3514% ( 5) 00:09:22.314 22483.889 - 22584.714: 99.3711% ( 2) 00:09:22.314 28029.243 - 28230.892: 99.3809% ( 1) 00:09:22.314 28230.892 - 28432.542: 99.4792% ( 10) 00:09:22.314 28432.542 - 28634.191: 99.5873% ( 11) 00:09:22.314 28634.191 - 28835.840: 99.6954% ( 11) 00:09:22.314 28835.840 - 29037.489: 99.8035% ( 11) 00:09:22.314 29037.489 - 29239.138: 99.8919% ( 9) 00:09:22.314 29239.138 - 29440.788: 99.9902% ( 10) 00:09:22.314 29440.788 - 29642.437: 100.0000% ( 1) 00:09:22.314 00:09:22.314 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:22.314 ============================================================================== 00:09:22.314 Range in us Cumulative IO count 00:09:22.314 5772.209 - 5797.415: 0.0295% ( 3) 00:09:22.314 5797.415 - 5822.622: 0.0688% ( 4) 00:09:22.314 5822.622 - 5847.828: 0.0983% ( 3) 00:09:22.314 5847.828 - 5873.034: 0.1965% ( 10) 00:09:22.314 5873.034 - 5898.240: 0.2457% ( 5) 00:09:22.314 5898.240 - 5923.446: 0.3145% ( 7) 00:09:22.314 5923.446 - 5948.652: 0.3538% ( 4) 00:09:22.314 5948.652 - 5973.858: 0.3734% ( 2) 00:09:22.314 5973.858 - 5999.065: 0.4029% ( 3) 00:09:22.314 5999.065 - 6024.271: 0.4127% ( 1) 00:09:22.314 6024.271 - 6049.477: 0.4422% ( 3) 00:09:22.314 6049.477 - 6074.683: 0.4520% ( 1) 00:09:22.314 6074.683 - 6099.889: 0.4815% ( 3) 00:09:22.314 6099.889 - 6125.095: 0.5012% ( 2) 00:09:22.314 6125.095 - 6150.302: 0.5208% ( 2) 00:09:22.314 6150.302 - 6175.508: 0.5405% ( 2) 00:09:22.314 6175.508 - 6200.714: 0.5503% ( 1) 00:09:22.314 6200.714 - 6225.920: 0.5798% ( 3) 00:09:22.314 6225.920 - 6251.126: 0.5994% ( 2) 00:09:22.314 6251.126 - 6276.332: 0.6191% ( 2) 00:09:22.314 6276.332 - 6301.538: 0.6289% ( 1) 00:09:22.314 9275.865 - 9326.277: 0.6486% ( 2) 00:09:22.314 9326.277 - 9376.689: 0.6879% ( 4) 00:09:22.314 9376.689 - 9427.102: 0.7665% ( 8) 00:09:22.314 9427.102 - 9477.514: 0.8550% ( 9) 00:09:22.314 9477.514 - 9527.926: 1.0515% ( 20) 00:09:22.314 9527.926 - 9578.338: 1.3561% ( 31) 00:09:22.314 9578.338 - 9628.751: 1.6411% ( 29) 00:09:22.314 9628.751 - 9679.163: 1.9458% ( 31) 00:09:22.314 9679.163 - 9729.575: 2.2013% ( 26) 00:09:22.314 9729.575 - 9779.988: 2.5354% ( 34) 00:09:22.314 9779.988 - 9830.400: 2.9383% ( 41) 00:09:22.314 9830.400 - 9880.812: 3.2429% ( 31) 00:09:22.314 9880.812 - 9931.225: 3.5279% ( 29) 00:09:22.314 9931.225 - 9981.637: 3.9210% ( 40) 00:09:22.314 9981.637 - 10032.049: 4.3534% ( 44) 00:09:22.314 10032.049 - 10082.462: 4.6875% ( 34) 00:09:22.314 10082.462 - 10132.874: 5.0511% ( 37) 00:09:22.314 10132.874 - 10183.286: 5.4049% ( 36) 00:09:22.314 10183.286 - 10233.698: 5.7685% ( 37) 00:09:22.314 10233.698 - 10284.111: 6.1714% ( 41) 00:09:22.314 10284.111 - 10334.523: 6.5546% ( 39) 00:09:22.314 10334.523 - 10384.935: 6.8691% ( 32) 00:09:22.314 10384.935 - 10435.348: 7.2720% ( 41) 00:09:22.314 10435.348 - 10485.760: 7.6651% ( 40) 00:09:22.314 10485.760 - 10536.172: 8.3530% ( 70) 00:09:22.314 10536.172 - 10586.585: 9.0507% ( 71) 00:09:22.314 10586.585 - 10636.997: 9.6796% ( 64) 00:09:22.314 10636.997 - 10687.409: 10.4265% ( 76) 00:09:22.314 10687.409 - 10737.822: 11.1635% ( 75) 00:09:22.314 
10737.822 - 10788.234: 12.1561% ( 101) 00:09:22.314 10788.234 - 10838.646: 13.0405% ( 90) 00:09:22.314 10838.646 - 10889.058: 13.7579% ( 73) 00:09:22.314 10889.058 - 10939.471: 14.3966% ( 65) 00:09:22.314 10939.471 - 10989.883: 15.2516% ( 87) 00:09:22.314 10989.883 - 11040.295: 16.1065% ( 87) 00:09:22.314 11040.295 - 11090.708: 16.9320% ( 84) 00:09:22.314 11090.708 - 11141.120: 17.8754% ( 96) 00:09:22.314 11141.120 - 11191.532: 18.9269% ( 107) 00:09:22.314 11191.532 - 11241.945: 20.4796% ( 158) 00:09:22.314 11241.945 - 11292.357: 21.6785% ( 122) 00:09:22.314 11292.357 - 11342.769: 23.0248% ( 137) 00:09:22.314 11342.769 - 11393.182: 24.5283% ( 153) 00:09:22.314 11393.182 - 11443.594: 26.0318% ( 153) 00:09:22.314 11443.594 - 11494.006: 27.5649% ( 156) 00:09:22.314 11494.006 - 11544.418: 28.9701% ( 143) 00:09:22.314 11544.418 - 11594.831: 30.3852% ( 144) 00:09:22.314 11594.831 - 11645.243: 31.8298% ( 147) 00:09:22.314 11645.243 - 11695.655: 33.1466% ( 134) 00:09:22.314 11695.655 - 11746.068: 34.9941% ( 188) 00:09:22.314 11746.068 - 11796.480: 36.3502% ( 138) 00:09:22.314 11796.480 - 11846.892: 37.6965% ( 137) 00:09:22.314 11846.892 - 11897.305: 38.8168% ( 114) 00:09:22.314 11897.305 - 11947.717: 40.0354% ( 124) 00:09:22.314 11947.717 - 11998.129: 41.2146% ( 120) 00:09:22.314 11998.129 - 12048.542: 42.4725% ( 128) 00:09:22.314 12048.542 - 12098.954: 43.5829% ( 113) 00:09:22.314 12098.954 - 12149.366: 44.8408% ( 128) 00:09:22.314 12149.366 - 12199.778: 45.7645% ( 94) 00:09:22.314 12199.778 - 12250.191: 46.9045% ( 116) 00:09:22.314 12250.191 - 12300.603: 48.2115% ( 133) 00:09:22.314 12300.603 - 12351.015: 49.2433% ( 105) 00:09:22.314 12351.015 - 12401.428: 50.2948% ( 107) 00:09:22.314 12401.428 - 12451.840: 51.2382% ( 96) 00:09:22.314 12451.840 - 12502.252: 52.3683% ( 115) 00:09:22.314 12502.252 - 12552.665: 53.2921% ( 94) 00:09:22.314 12552.665 - 12603.077: 54.4910% ( 122) 00:09:22.314 12603.077 - 12653.489: 55.5326% ( 106) 00:09:22.314 12653.489 - 12703.902: 56.6333% ( 112) 00:09:22.314 12703.902 - 12754.314: 58.0582% ( 145) 00:09:22.314 12754.314 - 12804.726: 59.4045% ( 137) 00:09:22.314 12804.726 - 12855.138: 60.6427% ( 126) 00:09:22.314 12855.138 - 12905.551: 61.9399% ( 132) 00:09:22.314 12905.551 - 13006.375: 64.7406% ( 285) 00:09:22.314 13006.375 - 13107.200: 67.1580% ( 246) 00:09:22.314 13107.200 - 13208.025: 69.3396% ( 222) 00:09:22.314 13208.025 - 13308.849: 71.5114% ( 221) 00:09:22.314 13308.849 - 13409.674: 73.6242% ( 215) 00:09:22.314 13409.674 - 13510.498: 75.2162% ( 162) 00:09:22.314 13510.498 - 13611.323: 76.9163% ( 173) 00:09:22.314 13611.323 - 13712.148: 78.4002% ( 151) 00:09:22.314 13712.148 - 13812.972: 79.8153% ( 144) 00:09:22.314 13812.972 - 13913.797: 81.6627% ( 188) 00:09:22.314 13913.797 - 14014.622: 83.6085% ( 198) 00:09:22.314 14014.622 - 14115.446: 85.2103% ( 163) 00:09:22.314 14115.446 - 14216.271: 86.9104% ( 173) 00:09:22.314 14216.271 - 14317.095: 88.3156% ( 143) 00:09:22.314 14317.095 - 14417.920: 89.5047% ( 121) 00:09:22.314 14417.920 - 14518.745: 90.7331% ( 125) 00:09:22.314 14518.745 - 14619.569: 91.8436% ( 113) 00:09:22.314 14619.569 - 14720.394: 92.5708% ( 74) 00:09:22.314 14720.394 - 14821.218: 93.3471% ( 79) 00:09:22.314 14821.218 - 14922.043: 93.9465% ( 61) 00:09:22.314 14922.043 - 15022.868: 94.4477% ( 51) 00:09:22.314 15022.868 - 15123.692: 94.7917% ( 35) 00:09:22.314 15123.692 - 15224.517: 95.2535% ( 47) 00:09:22.314 15224.517 - 15325.342: 95.5778% ( 33) 00:09:22.314 15325.342 - 15426.166: 95.8333% ( 26) 00:09:22.314 15426.166 - 15526.991: 96.0987% ( 
27) 00:09:22.314 15526.991 - 15627.815: 96.2854% ( 19) 00:09:22.314 15627.815 - 15728.640: 96.4623% ( 18) 00:09:22.314 15728.640 - 15829.465: 96.9241% ( 47) 00:09:22.314 15829.465 - 15930.289: 97.2386% ( 32) 00:09:22.314 15930.289 - 16031.114: 97.5039% ( 27) 00:09:22.314 16031.114 - 16131.938: 97.8184% ( 32) 00:09:22.314 16131.938 - 16232.763: 97.9560% ( 14) 00:09:22.314 16232.763 - 16333.588: 98.0149% ( 6) 00:09:22.314 16333.588 - 16434.412: 98.1329% ( 12) 00:09:22.314 16434.412 - 16535.237: 98.2311% ( 10) 00:09:22.314 16535.237 - 16636.062: 98.3294% ( 10) 00:09:22.314 16636.062 - 16736.886: 98.4178% ( 9) 00:09:22.314 16736.886 - 16837.711: 98.5653% ( 15) 00:09:22.314 16837.711 - 16938.535: 98.6144% ( 5) 00:09:22.314 16938.535 - 17039.360: 98.6832% ( 7) 00:09:22.314 17039.360 - 17140.185: 98.7323% ( 5) 00:09:22.314 17140.185 - 17241.009: 98.7421% ( 1) 00:09:22.314 21173.169 - 21273.994: 98.7716% ( 3) 00:09:22.314 21273.994 - 21374.818: 98.8306% ( 6) 00:09:22.314 21374.818 - 21475.643: 98.8895% ( 6) 00:09:22.314 21475.643 - 21576.468: 98.9485% ( 6) 00:09:22.314 21576.468 - 21677.292: 99.0075% ( 6) 00:09:22.314 21677.292 - 21778.117: 99.0664% ( 6) 00:09:22.315 21778.117 - 21878.942: 99.1156% ( 5) 00:09:22.315 21878.942 - 21979.766: 99.1647% ( 5) 00:09:22.315 21979.766 - 22080.591: 99.2138% ( 5) 00:09:22.315 22080.591 - 22181.415: 99.2728% ( 6) 00:09:22.315 22181.415 - 22282.240: 99.3318% ( 6) 00:09:22.315 22282.240 - 22383.065: 99.3711% ( 4) 00:09:22.315 28230.892 - 28432.542: 99.4693% ( 10) 00:09:22.315 28432.542 - 28634.191: 99.5676% ( 10) 00:09:22.315 28634.191 - 28835.840: 99.6757% ( 11) 00:09:22.315 28835.840 - 29037.489: 99.7740% ( 10) 00:09:22.315 29037.489 - 29239.138: 99.8722% ( 10) 00:09:22.315 29239.138 - 29440.788: 99.9705% ( 10) 00:09:22.315 29440.788 - 29642.437: 100.0000% ( 3) 00:09:22.315 00:09:22.315 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:22.315 ============================================================================== 00:09:22.315 Range in us Cumulative IO count 00:09:22.315 4663.138 - 4688.345: 0.0098% ( 1) 00:09:22.315 4839.582 - 4864.788: 0.0684% ( 6) 00:09:22.315 4864.788 - 4889.994: 0.1172% ( 5) 00:09:22.315 4889.994 - 4915.200: 0.1562% ( 4) 00:09:22.315 4915.200 - 4940.406: 0.2148% ( 6) 00:09:22.315 4940.406 - 4965.612: 0.2344% ( 2) 00:09:22.315 4965.612 - 4990.818: 0.2734% ( 4) 00:09:22.315 4990.818 - 5016.025: 0.3125% ( 4) 00:09:22.315 5016.025 - 5041.231: 0.3320% ( 2) 00:09:22.315 5041.231 - 5066.437: 0.3516% ( 2) 00:09:22.315 5066.437 - 5091.643: 0.3711% ( 2) 00:09:22.315 5091.643 - 5116.849: 0.3809% ( 1) 00:09:22.315 5116.849 - 5142.055: 0.4004% ( 2) 00:09:22.315 5142.055 - 5167.262: 0.4102% ( 1) 00:09:22.315 5167.262 - 5192.468: 0.4297% ( 2) 00:09:22.315 5192.468 - 5217.674: 0.4395% ( 1) 00:09:22.315 5217.674 - 5242.880: 0.4492% ( 1) 00:09:22.315 5242.880 - 5268.086: 0.4688% ( 2) 00:09:22.315 5268.086 - 5293.292: 0.4785% ( 1) 00:09:22.315 5293.292 - 5318.498: 0.4883% ( 1) 00:09:22.315 5318.498 - 5343.705: 0.5078% ( 2) 00:09:22.315 5343.705 - 5368.911: 0.5176% ( 1) 00:09:22.315 5368.911 - 5394.117: 0.5371% ( 2) 00:09:22.315 5394.117 - 5419.323: 0.5469% ( 1) 00:09:22.315 5419.323 - 5444.529: 0.5566% ( 1) 00:09:22.315 5444.529 - 5469.735: 0.5762% ( 2) 00:09:22.315 5469.735 - 5494.942: 0.5957% ( 2) 00:09:22.315 5494.942 - 5520.148: 0.6152% ( 2) 00:09:22.315 5520.148 - 5545.354: 0.6250% ( 1) 00:09:22.315 8922.978 - 8973.391: 0.6445% ( 2) 00:09:22.315 8973.391 - 9023.803: 0.6934% ( 5) 00:09:22.315 9023.803 - 9074.215: 0.7324% ( 4) 
00:09:22.315 9074.215 - 9124.628: 0.7812% ( 5) 00:09:22.315 9124.628 - 9175.040: 0.8203% ( 4) 00:09:22.315 9175.040 - 9225.452: 0.8887% ( 7) 00:09:22.315 9225.452 - 9275.865: 0.9570% ( 7) 00:09:22.315 9275.865 - 9326.277: 1.0352% ( 8) 00:09:22.315 9326.277 - 9376.689: 1.0645% ( 3) 00:09:22.315 9376.689 - 9427.102: 1.1230% ( 6) 00:09:22.315 9427.102 - 9477.514: 1.1914% ( 7) 00:09:22.315 9477.514 - 9527.926: 1.2500% ( 6) 00:09:22.315 9527.926 - 9578.338: 1.3770% ( 13) 00:09:22.315 9578.338 - 9628.751: 1.6113% ( 24) 00:09:22.315 9628.751 - 9679.163: 1.9141% ( 31) 00:09:22.315 9679.163 - 9729.575: 2.2070% ( 30) 00:09:22.315 9729.575 - 9779.988: 2.5293% ( 33) 00:09:22.315 9779.988 - 9830.400: 2.9004% ( 38) 00:09:22.315 9830.400 - 9880.812: 3.5742% ( 69) 00:09:22.315 9880.812 - 9931.225: 4.1504% ( 59) 00:09:22.315 9931.225 - 9981.637: 4.5898% ( 45) 00:09:22.315 9981.637 - 10032.049: 4.9609% ( 38) 00:09:22.315 10032.049 - 10082.462: 5.2148% ( 26) 00:09:22.315 10082.462 - 10132.874: 5.3809% ( 17) 00:09:22.315 10132.874 - 10183.286: 5.5176% ( 14) 00:09:22.315 10183.286 - 10233.698: 5.5859% ( 7) 00:09:22.315 10233.698 - 10284.111: 5.6934% ( 11) 00:09:22.315 10284.111 - 10334.523: 5.8887% ( 20) 00:09:22.315 10334.523 - 10384.935: 6.0742% ( 19) 00:09:22.315 10384.935 - 10435.348: 6.3867% ( 32) 00:09:22.315 10435.348 - 10485.760: 6.8945% ( 52) 00:09:22.315 10485.760 - 10536.172: 7.5977% ( 72) 00:09:22.315 10536.172 - 10586.585: 8.2617% ( 68) 00:09:22.315 10586.585 - 10636.997: 9.3457% ( 111) 00:09:22.315 10636.997 - 10687.409: 10.1758% ( 85) 00:09:22.315 10687.409 - 10737.822: 10.8789% ( 72) 00:09:22.315 10737.822 - 10788.234: 11.9629% ( 111) 00:09:22.315 10788.234 - 10838.646: 12.8711% ( 93) 00:09:22.315 10838.646 - 10889.058: 14.1504% ( 131) 00:09:22.315 10889.058 - 10939.471: 15.2539% ( 113) 00:09:22.315 10939.471 - 10989.883: 16.4746% ( 125) 00:09:22.315 10989.883 - 11040.295: 17.7539% ( 131) 00:09:22.315 11040.295 - 11090.708: 18.8672% ( 114) 00:09:22.315 11090.708 - 11141.120: 20.0391% ( 120) 00:09:22.315 11141.120 - 11191.532: 21.2598% ( 125) 00:09:22.315 11191.532 - 11241.945: 22.5586% ( 133) 00:09:22.315 11241.945 - 11292.357: 23.8086% ( 128) 00:09:22.315 11292.357 - 11342.769: 25.0488% ( 127) 00:09:22.315 11342.769 - 11393.182: 26.2695% ( 125) 00:09:22.315 11393.182 - 11443.594: 27.5391% ( 130) 00:09:22.315 11443.594 - 11494.006: 28.9551% ( 145) 00:09:22.315 11494.006 - 11544.418: 30.0195% ( 109) 00:09:22.315 11544.418 - 11594.831: 30.9082% ( 91) 00:09:22.315 11594.831 - 11645.243: 31.7383% ( 85) 00:09:22.315 11645.243 - 11695.655: 32.7246% ( 101) 00:09:22.315 11695.655 - 11746.068: 33.9941% ( 130) 00:09:22.315 11746.068 - 11796.480: 34.9316% ( 96) 00:09:22.315 11796.480 - 11846.892: 36.0645% ( 116) 00:09:22.315 11846.892 - 11897.305: 36.9727% ( 93) 00:09:22.315 11897.305 - 11947.717: 38.0273% ( 108) 00:09:22.315 11947.717 - 11998.129: 39.0820% ( 108) 00:09:22.315 11998.129 - 12048.542: 40.3027% ( 125) 00:09:22.315 12048.542 - 12098.954: 41.2305% ( 95) 00:09:22.315 12098.954 - 12149.366: 42.1387% ( 93) 00:09:22.315 12149.366 - 12199.778: 43.1250% ( 101) 00:09:22.315 12199.778 - 12250.191: 44.2285% ( 113) 00:09:22.315 12250.191 - 12300.603: 45.3711% ( 117) 00:09:22.315 12300.603 - 12351.015: 46.6504% ( 131) 00:09:22.315 12351.015 - 12401.428: 47.8516% ( 123) 00:09:22.315 12401.428 - 12451.840: 49.3359% ( 152) 00:09:22.315 12451.840 - 12502.252: 50.6055% ( 130) 00:09:22.315 12502.252 - 12552.665: 52.1094% ( 154) 00:09:22.315 12552.665 - 12603.077: 53.3008% ( 122) 00:09:22.315 12603.077 - 
12653.489: 54.5605% ( 129) 00:09:22.315 12653.489 - 12703.902: 56.2598% ( 174) 00:09:22.315 12703.902 - 12754.314: 57.5000% ( 127) 00:09:22.315 12754.314 - 12804.726: 58.7012% ( 123) 00:09:22.315 12804.726 - 12855.138: 59.9414% ( 127) 00:09:22.315 12855.138 - 12905.551: 61.0742% ( 116) 00:09:22.315 12905.551 - 13006.375: 63.7793% ( 277) 00:09:22.315 13006.375 - 13107.200: 66.6406% ( 293) 00:09:22.315 13107.200 - 13208.025: 69.2578% ( 268) 00:09:22.315 13208.025 - 13308.849: 71.3086% ( 210) 00:09:22.315 13308.849 - 13409.674: 73.5254% ( 227) 00:09:22.315 13409.674 - 13510.498: 75.6543% ( 218) 00:09:22.315 13510.498 - 13611.323: 77.9102% ( 231) 00:09:22.315 13611.323 - 13712.148: 80.2539% ( 240) 00:09:22.315 13712.148 - 13812.972: 82.2266% ( 202) 00:09:22.315 13812.972 - 13913.797: 83.9746% ( 179) 00:09:22.315 13913.797 - 14014.622: 85.2539% ( 131) 00:09:22.315 14014.622 - 14115.446: 86.4258% ( 120) 00:09:22.315 14115.446 - 14216.271: 87.3145% ( 91) 00:09:22.315 14216.271 - 14317.095: 88.1934% ( 90) 00:09:22.315 14317.095 - 14417.920: 89.2383% ( 107) 00:09:22.315 14417.920 - 14518.745: 90.3223% ( 111) 00:09:22.315 14518.745 - 14619.569: 91.2891% ( 99) 00:09:22.315 14619.569 - 14720.394: 91.9922% ( 72) 00:09:22.315 14720.394 - 14821.218: 92.7539% ( 78) 00:09:22.315 14821.218 - 14922.043: 93.6328% ( 90) 00:09:22.315 14922.043 - 15022.868: 94.4434% ( 83) 00:09:22.315 15022.868 - 15123.692: 95.2051% ( 78) 00:09:22.315 15123.692 - 15224.517: 95.7031% ( 51) 00:09:22.315 15224.517 - 15325.342: 96.0938% ( 40) 00:09:22.315 15325.342 - 15426.166: 96.5039% ( 42) 00:09:22.315 15426.166 - 15526.991: 96.8164% ( 32) 00:09:22.315 15526.991 - 15627.815: 97.0215% ( 21) 00:09:22.315 15627.815 - 15728.640: 97.1973% ( 18) 00:09:22.315 15728.640 - 15829.465: 97.4121% ( 22) 00:09:22.315 15829.465 - 15930.289: 97.6270% ( 22) 00:09:22.315 15930.289 - 16031.114: 98.0176% ( 40) 00:09:22.315 16031.114 - 16131.938: 98.2715% ( 26) 00:09:22.315 16131.938 - 16232.763: 98.5156% ( 25) 00:09:22.315 16232.763 - 16333.588: 98.7109% ( 20) 00:09:22.315 16333.588 - 16434.412: 98.9551% ( 25) 00:09:22.315 16434.412 - 16535.237: 99.1113% ( 16) 00:09:22.316 16535.237 - 16636.062: 99.2285% ( 12) 00:09:22.316 16636.062 - 16736.886: 99.3164% ( 9) 00:09:22.316 16736.886 - 16837.711: 99.3652% ( 5) 00:09:22.316 16837.711 - 16938.535: 99.3750% ( 1) 00:09:22.316 21374.818 - 21475.643: 99.4238% ( 5) 00:09:22.316 21475.643 - 21576.468: 99.4824% ( 6) 00:09:22.316 21576.468 - 21677.292: 99.5410% ( 6) 00:09:22.316 21677.292 - 21778.117: 99.5996% ( 6) 00:09:22.316 21778.117 - 21878.942: 99.6582% ( 6) 00:09:22.316 21878.942 - 21979.766: 99.7168% ( 6) 00:09:22.316 21979.766 - 22080.591: 99.7754% ( 6) 00:09:22.316 22080.591 - 22181.415: 99.8340% ( 6) 00:09:22.316 22181.415 - 22282.240: 99.8926% ( 6) 00:09:22.316 22282.240 - 22383.065: 99.9512% ( 6) 00:09:22.316 22383.065 - 22483.889: 100.0000% ( 5) 00:09:22.316 00:09:22.316 11:04:50 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:22.316 00:09:22.316 real 0m2.450s 00:09:22.316 user 0m2.159s 00:09:22.316 sys 0m0.176s 00:09:22.316 11:04:50 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.316 11:04:50 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:09:22.316 ************************************ 00:09:22.316 END TEST nvme_perf 00:09:22.316 ************************************ 00:09:22.316 11:04:50 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:22.316 11:04:50 nvme -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:22.316 11:04:50 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.316 11:04:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.316 ************************************ 00:09:22.316 START TEST nvme_hello_world 00:09:22.316 ************************************ 00:09:22.316 11:04:50 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:22.316 Initializing NVMe Controllers 00:09:22.316 Attached to 0000:00:13.0 00:09:22.316 Namespace ID: 1 size: 1GB 00:09:22.316 Attached to 0000:00:10.0 00:09:22.316 Namespace ID: 1 size: 6GB 00:09:22.316 Attached to 0000:00:11.0 00:09:22.316 Namespace ID: 1 size: 5GB 00:09:22.316 Attached to 0000:00:12.0 00:09:22.316 Namespace ID: 1 size: 4GB 00:09:22.316 Namespace ID: 2 size: 4GB 00:09:22.316 Namespace ID: 3 size: 4GB 00:09:22.316 Initialization complete. 00:09:22.316 INFO: using host memory buffer for IO 00:09:22.316 Hello world! 00:09:22.316 INFO: using host memory buffer for IO 00:09:22.316 Hello world! 00:09:22.316 INFO: using host memory buffer for IO 00:09:22.316 Hello world! 00:09:22.316 INFO: using host memory buffer for IO 00:09:22.316 Hello world! 00:09:22.316 INFO: using host memory buffer for IO 00:09:22.316 Hello world! 00:09:22.316 INFO: using host memory buffer for IO 00:09:22.316 Hello world! 00:09:22.316 00:09:22.316 real 0m0.197s 00:09:22.316 user 0m0.071s 00:09:22.316 sys 0m0.076s 00:09:22.316 11:04:51 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.316 11:04:51 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:22.316 ************************************ 00:09:22.316 END TEST nvme_hello_world 00:09:22.316 ************************************ 00:09:22.316 11:04:51 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:22.316 11:04:51 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.316 11:04:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.316 11:04:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.316 ************************************ 00:09:22.316 START TEST nvme_sgl 00:09:22.316 ************************************ 00:09:22.316 11:04:51 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:22.577 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:09:22.577 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:09:22.577 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:09:22.577 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:09:22.577 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:09:22.577 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:09:22.577 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:09:22.577 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:09:22.577 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:09:22.577 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:09:22.577 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:09:22.577 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:09:22.577 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:09:22.577 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:09:22.577 0000:00:10.0: build_io_request_3 Invalid IO length parameter 
00:09:22.577 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:09:22.577 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:09:22.577 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:09:22.577 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:09:22.577 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:09:22.577 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:09:22.577 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:09:22.577 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:09:22.577 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:09:22.577 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:09:22.577 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:09:22.577 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:09:22.577 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:09:22.577 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:09:22.577 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:09:22.577 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:09:22.577 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:09:22.577 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:09:22.577 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:09:22.578 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:22.578 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:22.578 NVMe Readv/Writev Request test 00:09:22.578 Attached to 0000:00:13.0 00:09:22.578 Attached to 0000:00:10.0 00:09:22.578 Attached to 0000:00:11.0 00:09:22.578 Attached to 0000:00:12.0 00:09:22.578 0000:00:10.0: build_io_request_2 test passed 00:09:22.578 0000:00:10.0: build_io_request_4 test passed 00:09:22.578 0000:00:10.0: build_io_request_5 test passed 00:09:22.578 0000:00:10.0: build_io_request_6 test passed 00:09:22.578 0000:00:10.0: build_io_request_7 test passed 00:09:22.578 0000:00:10.0: build_io_request_10 test passed 00:09:22.578 0000:00:11.0: build_io_request_2 test passed 00:09:22.578 0000:00:11.0: build_io_request_4 test passed 00:09:22.578 0000:00:11.0: build_io_request_5 test passed 00:09:22.578 0000:00:11.0: build_io_request_6 test passed 00:09:22.578 0000:00:11.0: build_io_request_7 test passed 00:09:22.578 0000:00:11.0: build_io_request_10 test passed 00:09:22.578 Cleaning up... 
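Each of the wrapped test steps in this log (nvme_hello_world and nvme_sgl above, nvme_e2edp and the rest below) is an ordinary binary under the repository paths that the run_test calls print, so a failing step can be re-run in isolation outside the wrapper. A minimal sketch, with paths copied from the log and assuming the NVMe devices are already bound for SPDK use (as they evidently are in this run):

    # re-run a single sub-test by hand instead of through run_test
    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvme/sgl/sgl          # prints the build_io_request_* pass/fail lines seen above
    echo "exit code: $?"              # a non-zero exit is what fails the wrapped test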
00:09:22.578 00:09:22.578 real 0m0.241s 00:09:22.578 user 0m0.120s 00:09:22.578 sys 0m0.082s 00:09:22.578 11:04:51 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.578 11:04:51 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:22.578 ************************************ 00:09:22.578 END TEST nvme_sgl 00:09:22.578 ************************************ 00:09:22.578 11:04:51 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:22.578 11:04:51 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.578 11:04:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.578 11:04:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.578 ************************************ 00:09:22.578 START TEST nvme_e2edp 00:09:22.578 ************************************ 00:09:22.578 11:04:51 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:22.836 NVMe Write/Read with End-to-End data protection test 00:09:22.836 Attached to 0000:00:13.0 00:09:22.836 Attached to 0000:00:10.0 00:09:22.836 Attached to 0000:00:11.0 00:09:22.836 Attached to 0000:00:12.0 00:09:22.836 Cleaning up... 00:09:22.836 00:09:22.836 real 0m0.238s 00:09:22.836 user 0m0.093s 00:09:22.836 sys 0m0.094s 00:09:22.836 11:04:51 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.836 11:04:51 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:22.836 ************************************ 00:09:22.836 END TEST nvme_e2edp 00:09:22.836 ************************************ 00:09:22.836 11:04:51 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:22.836 11:04:51 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.836 11:04:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.836 11:04:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.836 ************************************ 00:09:22.836 START TEST nvme_reserve 00:09:22.836 ************************************ 00:09:22.836 11:04:51 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:23.095 ===================================================== 00:09:23.095 NVMe Controller at PCI bus 0, device 19, function 0 00:09:23.095 ===================================================== 00:09:23.095 Reservations: Not Supported 00:09:23.095 ===================================================== 00:09:23.095 NVMe Controller at PCI bus 0, device 16, function 0 00:09:23.095 ===================================================== 00:09:23.095 Reservations: Not Supported 00:09:23.095 ===================================================== 00:09:23.095 NVMe Controller at PCI bus 0, device 17, function 0 00:09:23.095 ===================================================== 00:09:23.095 Reservations: Not Supported 00:09:23.095 ===================================================== 00:09:23.095 NVMe Controller at PCI bus 0, device 18, function 0 00:09:23.095 ===================================================== 00:09:23.095 Reservations: Not Supported 00:09:23.095 Reservation test passed 00:09:23.095 00:09:23.095 real 0m0.230s 00:09:23.095 user 0m0.087s 00:09:23.095 sys 0m0.099s 00:09:23.095 ************************************ 00:09:23.095 END TEST nvme_reserve 00:09:23.095 ************************************ 00:09:23.095 11:04:51 
nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.095 11:04:51 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:23.095 11:04:51 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:23.095 11:04:51 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:23.095 11:04:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.095 11:04:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.095 ************************************ 00:09:23.095 START TEST nvme_err_injection 00:09:23.095 ************************************ 00:09:23.095 11:04:51 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:23.353 NVMe Error Injection test 00:09:23.353 Attached to 0000:00:13.0 00:09:23.353 Attached to 0000:00:10.0 00:09:23.353 Attached to 0000:00:11.0 00:09:23.353 Attached to 0000:00:12.0 00:09:23.353 0000:00:13.0: get features failed as expected 00:09:23.353 0000:00:10.0: get features failed as expected 00:09:23.353 0000:00:11.0: get features failed as expected 00:09:23.353 0000:00:12.0: get features failed as expected 00:09:23.353 0000:00:13.0: get features successfully as expected 00:09:23.353 0000:00:10.0: get features successfully as expected 00:09:23.353 0000:00:11.0: get features successfully as expected 00:09:23.353 0000:00:12.0: get features successfully as expected 00:09:23.353 0000:00:13.0: read failed as expected 00:09:23.353 0000:00:10.0: read failed as expected 00:09:23.353 0000:00:11.0: read failed as expected 00:09:23.353 0000:00:12.0: read failed as expected 00:09:23.353 0000:00:13.0: read successfully as expected 00:09:23.353 0000:00:10.0: read successfully as expected 00:09:23.353 0000:00:11.0: read successfully as expected 00:09:23.353 0000:00:12.0: read successfully as expected 00:09:23.353 Cleaning up... 00:09:23.353 00:09:23.353 real 0m0.207s 00:09:23.353 user 0m0.065s 00:09:23.353 sys 0m0.101s 00:09:23.353 ************************************ 00:09:23.353 END TEST nvme_err_injection 00:09:23.353 ************************************ 00:09:23.353 11:04:52 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.353 11:04:52 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:23.353 11:04:52 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:23.353 11:04:52 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:09:23.353 11:04:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.353 11:04:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.353 ************************************ 00:09:23.353 START TEST nvme_overhead 00:09:23.353 ************************************ 00:09:23.353 11:04:52 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:24.729 Initializing NVMe Controllers 00:09:24.729 Attached to 0000:00:13.0 00:09:24.729 Attached to 0000:00:10.0 00:09:24.729 Attached to 0000:00:11.0 00:09:24.729 Attached to 0000:00:12.0 00:09:24.729 Initialization complete. Launching workers. 
00:09:24.729 submit (in ns) avg, min, max = 12030.6, 9766.9, 104803.8 00:09:24.729 complete (in ns) avg, min, max = 8122.8, 7220.0, 266541.5 00:09:24.729 00:09:24.729 Submit histogram 00:09:24.729 ================ 00:09:24.729 Range in us Cumulative Count 00:09:24.729 9.748 - 9.797: 0.0056% ( 1) 00:09:24.729 9.846 - 9.895: 0.0112% ( 1) 00:09:24.729 10.191 - 10.240: 0.0168% ( 1) 00:09:24.729 10.486 - 10.535: 0.0225% ( 1) 00:09:24.729 10.683 - 10.732: 0.0281% ( 1) 00:09:24.729 10.732 - 10.782: 0.0674% ( 7) 00:09:24.729 10.782 - 10.831: 0.4715% ( 72) 00:09:24.729 10.831 - 10.880: 2.5933% ( 378) 00:09:24.729 10.880 - 10.929: 8.4086% ( 1036) 00:09:24.729 10.929 - 10.978: 19.0738% ( 1900) 00:09:24.729 10.978 - 11.028: 32.0123% ( 2305) 00:09:24.729 11.028 - 11.077: 42.8740% ( 1935) 00:09:24.729 11.077 - 11.126: 50.4743% ( 1354) 00:09:24.729 11.126 - 11.175: 54.7516% ( 762) 00:09:24.729 11.175 - 11.225: 57.0755% ( 414) 00:09:24.729 11.225 - 11.274: 58.4620% ( 247) 00:09:24.729 11.274 - 11.323: 59.1917% ( 130) 00:09:24.729 11.323 - 11.372: 59.6913% ( 89) 00:09:24.729 11.372 - 11.422: 60.0112% ( 57) 00:09:24.729 11.422 - 11.471: 60.3649% ( 63) 00:09:24.729 11.471 - 11.520: 60.8588% ( 88) 00:09:24.729 11.520 - 11.569: 61.3640% ( 90) 00:09:24.729 11.569 - 11.618: 62.0601% ( 124) 00:09:24.729 11.618 - 11.668: 62.7224% ( 118) 00:09:24.729 11.668 - 11.717: 63.2725% ( 98) 00:09:24.729 11.717 - 11.766: 63.9461% ( 120) 00:09:24.729 11.766 - 11.815: 64.7488% ( 143) 00:09:24.729 11.815 - 11.865: 65.6918% ( 168) 00:09:24.729 11.865 - 11.914: 66.6742% ( 175) 00:09:24.729 11.914 - 11.963: 67.6789% ( 179) 00:09:24.729 11.963 - 12.012: 68.8633% ( 211) 00:09:24.729 12.012 - 12.062: 69.8176% ( 170) 00:09:24.729 12.062 - 12.111: 70.6034% ( 140) 00:09:24.729 12.111 - 12.160: 71.2377% ( 113) 00:09:24.729 12.160 - 12.209: 71.7261% ( 87) 00:09:24.729 12.209 - 12.258: 72.0123% ( 51) 00:09:24.729 12.258 - 12.308: 72.2930% ( 50) 00:09:24.729 12.308 - 12.357: 72.5175% ( 40) 00:09:24.729 12.357 - 12.406: 72.6691% ( 27) 00:09:24.729 12.406 - 12.455: 72.7982% ( 23) 00:09:24.729 12.455 - 12.505: 72.8543% ( 10) 00:09:24.729 12.505 - 12.554: 72.9554% ( 18) 00:09:24.729 12.554 - 12.603: 73.0003% ( 8) 00:09:24.729 12.603 - 12.702: 73.0901% ( 16) 00:09:24.729 12.702 - 12.800: 73.1799% ( 16) 00:09:24.729 12.800 - 12.898: 73.3427% ( 29) 00:09:24.729 12.898 - 12.997: 73.5672% ( 40) 00:09:24.729 12.997 - 13.095: 73.8030% ( 42) 00:09:24.729 13.095 - 13.194: 74.0219% ( 39) 00:09:24.729 13.194 - 13.292: 75.0716% ( 187) 00:09:24.729 13.292 - 13.391: 79.8316% ( 848) 00:09:24.729 13.391 - 13.489: 86.0511% ( 1108) 00:09:24.729 13.489 - 13.588: 90.1824% ( 736) 00:09:24.729 13.588 - 13.686: 92.3435% ( 385) 00:09:24.729 13.686 - 13.785: 93.3539% ( 180) 00:09:24.729 13.785 - 13.883: 93.8928% ( 96) 00:09:24.729 13.883 - 13.982: 94.2576% ( 65) 00:09:24.729 13.982 - 14.080: 94.5944% ( 60) 00:09:24.729 14.080 - 14.178: 94.9200% ( 58) 00:09:24.729 14.178 - 14.277: 95.1221% ( 36) 00:09:24.729 14.277 - 14.375: 95.3466% ( 40) 00:09:24.729 14.375 - 14.474: 95.5262% ( 32) 00:09:24.729 14.474 - 14.572: 95.6722% ( 26) 00:09:24.729 14.572 - 14.671: 95.7171% ( 8) 00:09:24.729 14.671 - 14.769: 95.8181% ( 18) 00:09:24.729 14.769 - 14.868: 95.8911% ( 13) 00:09:24.729 14.868 - 14.966: 96.0034% ( 20) 00:09:24.729 14.966 - 15.065: 96.1100% ( 19) 00:09:24.729 15.065 - 15.163: 96.2335% ( 22) 00:09:24.729 15.163 - 15.262: 96.3233% ( 16) 00:09:24.729 15.262 - 15.360: 96.4637% ( 25) 00:09:24.729 15.360 - 15.458: 96.6152% ( 27) 00:09:24.729 15.458 - 15.557: 96.7050% ( 16) 
00:09:24.729 15.557 - 15.655: 96.8173% ( 20) 00:09:24.729 15.655 - 15.754: 96.9464% ( 23) 00:09:24.729 15.754 - 15.852: 97.0306% ( 15) 00:09:24.729 15.852 - 15.951: 97.0811% ( 9) 00:09:24.729 15.951 - 16.049: 97.1878% ( 19) 00:09:24.729 16.049 - 16.148: 97.2607% ( 13) 00:09:24.729 16.148 - 16.246: 97.3449% ( 15) 00:09:24.729 16.246 - 16.345: 97.4235% ( 14) 00:09:24.729 16.345 - 16.443: 97.4628% ( 7) 00:09:24.729 16.443 - 16.542: 97.5021% ( 7) 00:09:24.729 16.542 - 16.640: 97.5470% ( 8) 00:09:24.729 16.640 - 16.738: 97.6088% ( 11) 00:09:24.729 16.738 - 16.837: 97.6649% ( 10) 00:09:24.729 16.837 - 16.935: 97.7322% ( 12) 00:09:24.729 16.935 - 17.034: 97.7715% ( 7) 00:09:24.729 17.034 - 17.132: 97.8164% ( 8) 00:09:24.729 17.132 - 17.231: 97.8726% ( 10) 00:09:24.729 17.231 - 17.329: 97.9512% ( 14) 00:09:24.729 17.329 - 17.428: 98.0185% ( 12) 00:09:24.729 17.428 - 17.526: 98.1027% ( 15) 00:09:24.729 17.526 - 17.625: 98.1645% ( 11) 00:09:24.729 17.625 - 17.723: 98.2823% ( 21) 00:09:24.729 17.723 - 17.822: 98.3216% ( 7) 00:09:24.729 17.822 - 17.920: 98.3609% ( 7) 00:09:24.729 17.920 - 18.018: 98.3834% ( 4) 00:09:24.729 18.018 - 18.117: 98.4227% ( 7) 00:09:24.729 18.117 - 18.215: 98.4564% ( 6) 00:09:24.729 18.215 - 18.314: 98.4788% ( 4) 00:09:24.729 18.314 - 18.412: 98.5293% ( 9) 00:09:24.729 18.412 - 18.511: 98.5686% ( 7) 00:09:24.729 18.511 - 18.609: 98.6023% ( 6) 00:09:24.729 18.609 - 18.708: 98.6472% ( 8) 00:09:24.729 18.708 - 18.806: 98.6640% ( 3) 00:09:24.729 18.806 - 18.905: 98.6809% ( 3) 00:09:24.729 18.905 - 19.003: 98.7033% ( 4) 00:09:24.729 19.003 - 19.102: 98.7314% ( 5) 00:09:24.729 19.102 - 19.200: 98.7426% ( 2) 00:09:24.729 19.200 - 19.298: 98.7595% ( 3) 00:09:24.729 19.298 - 19.397: 98.7763% ( 3) 00:09:24.729 19.495 - 19.594: 98.7932% ( 3) 00:09:24.729 19.594 - 19.692: 98.8493% ( 10) 00:09:24.729 19.692 - 19.791: 98.9223% ( 13) 00:09:24.729 19.791 - 19.889: 99.0008% ( 14) 00:09:24.730 19.889 - 19.988: 99.0570% ( 10) 00:09:24.730 19.988 - 20.086: 99.0794% ( 4) 00:09:24.730 20.086 - 20.185: 99.1075% ( 5) 00:09:24.730 20.185 - 20.283: 99.1412% ( 6) 00:09:24.730 20.283 - 20.382: 99.2029% ( 11) 00:09:24.730 20.382 - 20.480: 99.2422% ( 7) 00:09:24.730 20.480 - 20.578: 99.2703% ( 5) 00:09:24.730 20.578 - 20.677: 99.2983% ( 5) 00:09:24.730 20.677 - 20.775: 99.3208% ( 4) 00:09:24.730 20.775 - 20.874: 99.3376% ( 3) 00:09:24.730 20.874 - 20.972: 99.3433% ( 1) 00:09:24.730 20.972 - 21.071: 99.3489% ( 1) 00:09:24.730 21.071 - 21.169: 99.3545% ( 1) 00:09:24.730 21.169 - 21.268: 99.3825% ( 5) 00:09:24.730 21.268 - 21.366: 99.4050% ( 4) 00:09:24.730 21.366 - 21.465: 99.4106% ( 1) 00:09:24.730 21.465 - 21.563: 99.4162% ( 1) 00:09:24.730 21.563 - 21.662: 99.4331% ( 3) 00:09:24.730 21.662 - 21.760: 99.4443% ( 2) 00:09:24.730 21.760 - 21.858: 99.4555% ( 2) 00:09:24.730 21.858 - 21.957: 99.4611% ( 1) 00:09:24.730 21.957 - 22.055: 99.4667% ( 1) 00:09:24.730 22.055 - 22.154: 99.4780% ( 2) 00:09:24.730 22.154 - 22.252: 99.4892% ( 2) 00:09:24.730 22.449 - 22.548: 99.4948% ( 1) 00:09:24.730 22.548 - 22.646: 99.5116% ( 3) 00:09:24.730 22.942 - 23.040: 99.5173% ( 1) 00:09:24.730 23.040 - 23.138: 99.5229% ( 1) 00:09:24.730 23.532 - 23.631: 99.5285% ( 1) 00:09:24.730 23.631 - 23.729: 99.5341% ( 1) 00:09:24.730 23.828 - 23.926: 99.5453% ( 2) 00:09:24.730 24.222 - 24.320: 99.5846% ( 7) 00:09:24.730 24.320 - 24.418: 99.6127% ( 5) 00:09:24.730 24.418 - 24.517: 99.6576% ( 8) 00:09:24.730 24.517 - 24.615: 99.6913% ( 6) 00:09:24.730 24.615 - 24.714: 99.7025% ( 2) 00:09:24.730 24.714 - 24.812: 99.7137% ( 2) 00:09:24.730 
24.812 - 24.911: 99.7193% ( 1) 00:09:24.730 24.911 - 25.009: 99.7250% ( 1) 00:09:24.730 25.108 - 25.206: 99.7530% ( 5) 00:09:24.730 25.206 - 25.403: 99.7755% ( 4) 00:09:24.730 25.403 - 25.600: 99.7867% ( 2) 00:09:24.730 25.600 - 25.797: 99.7979% ( 2) 00:09:24.730 25.994 - 26.191: 99.8091% ( 2) 00:09:24.730 26.191 - 26.388: 99.8148% ( 1) 00:09:24.730 26.388 - 26.585: 99.8204% ( 1) 00:09:24.730 26.585 - 26.782: 99.8316% ( 2) 00:09:24.730 26.782 - 26.978: 99.8372% ( 1) 00:09:24.730 26.978 - 27.175: 99.8541% ( 3) 00:09:24.730 27.175 - 27.372: 99.8597% ( 1) 00:09:24.730 27.372 - 27.569: 99.8653% ( 1) 00:09:24.730 28.357 - 28.554: 99.8765% ( 2) 00:09:24.730 29.145 - 29.342: 99.8821% ( 1) 00:09:24.730 30.917 - 31.114: 99.8877% ( 1) 00:09:24.730 31.114 - 31.311: 99.8933% ( 1) 00:09:24.730 31.311 - 31.508: 99.8990% ( 1) 00:09:24.730 31.508 - 31.705: 99.9046% ( 1) 00:09:24.730 31.705 - 31.902: 99.9102% ( 1) 00:09:24.730 31.902 - 32.098: 99.9158% ( 1) 00:09:24.730 35.643 - 35.840: 99.9214% ( 1) 00:09:24.730 37.809 - 38.006: 99.9270% ( 1) 00:09:24.730 38.203 - 38.400: 99.9326% ( 1) 00:09:24.730 38.794 - 38.991: 99.9439% ( 2) 00:09:24.730 39.778 - 39.975: 99.9495% ( 1) 00:09:24.730 40.172 - 40.369: 99.9551% ( 1) 00:09:24.730 40.369 - 40.566: 99.9607% ( 1) 00:09:24.730 42.142 - 42.338: 99.9663% ( 1) 00:09:24.730 42.535 - 42.732: 99.9719% ( 1) 00:09:24.730 45.292 - 45.489: 99.9775% ( 1) 00:09:24.730 47.655 - 47.852: 99.9888% ( 2) 00:09:24.730 83.889 - 84.283: 99.9944% ( 1) 00:09:24.730 104.763 - 105.551: 100.0000% ( 1) 00:09:24.730 00:09:24.730 Complete histogram 00:09:24.730 ================== 00:09:24.730 Range in us Cumulative Count 00:09:24.730 7.188 - 7.237: 0.0225% ( 4) 00:09:24.730 7.237 - 7.286: 0.5838% ( 100) 00:09:24.730 7.286 - 7.335: 5.7424% ( 919) 00:09:24.730 7.335 - 7.385: 18.5630% ( 2284) 00:09:24.730 7.385 - 7.434: 35.5206% ( 3021) 00:09:24.730 7.434 - 7.483: 49.6829% ( 2523) 00:09:24.730 7.483 - 7.532: 59.1524% ( 1687) 00:09:24.730 7.532 - 7.582: 64.6422% ( 978) 00:09:24.730 7.582 - 7.631: 67.9989% ( 598) 00:09:24.730 7.631 - 7.680: 69.8344% ( 327) 00:09:24.730 7.680 - 7.729: 70.8223% ( 176) 00:09:24.730 7.729 - 7.778: 71.3331% ( 91) 00:09:24.730 7.778 - 7.828: 71.6643% ( 59) 00:09:24.730 7.828 - 7.877: 71.9394% ( 49) 00:09:24.730 7.877 - 7.926: 72.1639% ( 40) 00:09:24.730 7.926 - 7.975: 72.4839% ( 57) 00:09:24.730 7.975 - 8.025: 72.7814% ( 53) 00:09:24.730 8.025 - 8.074: 73.1238% ( 61) 00:09:24.730 8.074 - 8.123: 73.5841% ( 82) 00:09:24.730 8.123 - 8.172: 73.8928% ( 55) 00:09:24.730 8.172 - 8.222: 74.0612% ( 30) 00:09:24.730 8.222 - 8.271: 74.2576% ( 35) 00:09:24.730 8.271 - 8.320: 74.3924% ( 24) 00:09:24.730 8.320 - 8.369: 74.5102% ( 21) 00:09:24.730 8.369 - 8.418: 74.5888% ( 14) 00:09:24.730 8.418 - 8.468: 74.6281% ( 7) 00:09:24.730 8.468 - 8.517: 74.7011% ( 13) 00:09:24.730 8.517 - 8.566: 74.7797% ( 14) 00:09:24.730 8.566 - 8.615: 74.8527% ( 13) 00:09:24.730 8.615 - 8.665: 74.8863% ( 6) 00:09:24.730 8.665 - 8.714: 74.9312% ( 8) 00:09:24.730 8.714 - 8.763: 74.9761% ( 8) 00:09:24.730 8.763 - 8.812: 75.0042% ( 5) 00:09:24.730 8.812 - 8.862: 75.0267% ( 4) 00:09:24.730 8.862 - 8.911: 75.0547% ( 5) 00:09:24.730 8.911 - 8.960: 75.0716% ( 3) 00:09:24.730 8.960 - 9.009: 75.0940% ( 4) 00:09:24.730 9.009 - 9.058: 75.2849% ( 34) 00:09:24.730 9.058 - 9.108: 76.4131% ( 201) 00:09:24.730 9.108 - 9.157: 79.5397% ( 557) 00:09:24.730 9.157 - 9.206: 83.8451% ( 767) 00:09:24.730 9.206 - 9.255: 87.3309% ( 621) 00:09:24.730 9.255 - 9.305: 89.9467% ( 466) 00:09:24.730 9.305 - 9.354: 91.8159% ( 333) 
00:09:24.730 9.354 - 9.403: 93.1013% ( 229) 00:09:24.730 9.403 - 9.452: 94.0949% ( 177) 00:09:24.730 9.452 - 9.502: 94.6730% ( 103) 00:09:24.730 9.502 - 9.551: 95.1109% ( 78) 00:09:24.730 9.551 - 9.600: 95.3017% ( 34) 00:09:24.730 9.600 - 9.649: 95.4757% ( 31) 00:09:24.730 9.649 - 9.698: 95.6441% ( 30) 00:09:24.730 9.698 - 9.748: 95.7059% ( 11) 00:09:24.730 9.748 - 9.797: 95.7620% ( 10) 00:09:24.730 9.797 - 9.846: 95.8350% ( 13) 00:09:24.730 9.846 - 9.895: 95.9472% ( 20) 00:09:24.730 9.895 - 9.945: 96.0146% ( 12) 00:09:24.730 9.945 - 9.994: 96.0651% ( 9) 00:09:24.730 9.994 - 10.043: 96.1381% ( 13) 00:09:24.730 10.043 - 10.092: 96.1830% ( 8) 00:09:24.730 10.092 - 10.142: 96.2111% ( 5) 00:09:24.730 10.142 - 10.191: 96.2840% ( 13) 00:09:24.730 10.191 - 10.240: 96.3233% ( 7) 00:09:24.730 10.240 - 10.289: 96.3514% ( 5) 00:09:24.730 10.289 - 10.338: 96.3851% ( 6) 00:09:24.731 10.338 - 10.388: 96.4693% ( 15) 00:09:24.731 10.388 - 10.437: 96.5254% ( 10) 00:09:24.731 10.437 - 10.486: 96.5815% ( 10) 00:09:24.731 10.486 - 10.535: 96.6377% ( 10) 00:09:24.731 10.535 - 10.585: 96.6994% ( 11) 00:09:24.731 10.585 - 10.634: 96.7836% ( 15) 00:09:24.731 10.634 - 10.683: 96.8622% ( 14) 00:09:24.731 10.683 - 10.732: 96.9857% ( 22) 00:09:24.731 10.732 - 10.782: 97.0811% ( 17) 00:09:24.731 10.782 - 10.831: 97.1372% ( 10) 00:09:24.731 10.831 - 10.880: 97.2046% ( 12) 00:09:24.731 10.880 - 10.929: 97.2495% ( 8) 00:09:24.731 10.929 - 10.978: 97.3225% ( 13) 00:09:24.731 10.978 - 11.028: 97.3674% ( 8) 00:09:24.731 11.028 - 11.077: 97.3955% ( 5) 00:09:24.731 11.077 - 11.126: 97.4235% ( 5) 00:09:24.731 11.126 - 11.175: 97.4909% ( 12) 00:09:24.731 11.225 - 11.274: 97.5302% ( 7) 00:09:24.731 11.274 - 11.323: 97.5751% ( 8) 00:09:24.731 11.323 - 11.372: 97.6256% ( 9) 00:09:24.731 11.422 - 11.471: 97.6480% ( 4) 00:09:24.731 11.471 - 11.520: 97.6761% ( 5) 00:09:24.731 11.520 - 11.569: 97.6986% ( 4) 00:09:24.731 11.569 - 11.618: 97.7042% ( 1) 00:09:24.731 11.618 - 11.668: 97.7154% ( 2) 00:09:24.731 11.717 - 11.766: 97.7322% ( 3) 00:09:24.731 11.766 - 11.815: 97.7547% ( 4) 00:09:24.731 11.815 - 11.865: 97.7715% ( 3) 00:09:24.731 11.865 - 11.914: 97.7772% ( 1) 00:09:24.731 11.914 - 11.963: 97.7884% ( 2) 00:09:24.731 11.963 - 12.012: 97.7996% ( 2) 00:09:24.731 12.012 - 12.062: 97.8108% ( 2) 00:09:24.731 12.111 - 12.160: 97.8277% ( 3) 00:09:24.731 12.160 - 12.209: 97.8333% ( 1) 00:09:24.731 12.209 - 12.258: 97.8501% ( 3) 00:09:24.731 12.308 - 12.357: 97.8614% ( 2) 00:09:24.731 12.357 - 12.406: 97.8726% ( 2) 00:09:24.731 12.505 - 12.554: 97.8782% ( 1) 00:09:24.731 12.702 - 12.800: 97.8894% ( 2) 00:09:24.731 12.800 - 12.898: 97.9006% ( 2) 00:09:24.731 12.898 - 12.997: 97.9175% ( 3) 00:09:24.731 12.997 - 13.095: 97.9343% ( 3) 00:09:24.731 13.095 - 13.194: 97.9792% ( 8) 00:09:24.731 13.194 - 13.292: 98.0522% ( 13) 00:09:24.731 13.292 - 13.391: 98.0859% ( 6) 00:09:24.731 13.391 - 13.489: 98.1308% ( 8) 00:09:24.731 13.489 - 13.588: 98.1757% ( 8) 00:09:24.731 13.588 - 13.686: 98.2374% ( 11) 00:09:24.731 13.686 - 13.785: 98.3048% ( 12) 00:09:24.731 13.785 - 13.883: 98.4227% ( 21) 00:09:24.731 13.883 - 13.982: 98.5686% ( 26) 00:09:24.731 13.982 - 14.080: 98.6584% ( 16) 00:09:24.731 14.080 - 14.178: 98.7202% ( 11) 00:09:24.731 14.178 - 14.277: 98.7595% ( 7) 00:09:24.731 14.277 - 14.375: 98.7819% ( 4) 00:09:24.731 14.375 - 14.474: 98.8549% ( 13) 00:09:24.731 14.474 - 14.572: 98.8942% ( 7) 00:09:24.731 14.572 - 14.671: 98.9279% ( 6) 00:09:24.731 14.671 - 14.769: 98.9447% ( 3) 00:09:24.731 14.769 - 14.868: 98.9615% ( 3) 00:09:24.731 14.868 - 
14.966: 99.0121% ( 9) 00:09:24.731 14.966 - 15.065: 99.0345% ( 4) 00:09:24.731 15.065 - 15.163: 99.0514% ( 3) 00:09:24.731 15.163 - 15.262: 99.0570% ( 1) 00:09:24.731 15.262 - 15.360: 99.0626% ( 1) 00:09:24.731 15.360 - 15.458: 99.0738% ( 2) 00:09:24.731 15.458 - 15.557: 99.0850% ( 2) 00:09:24.731 15.557 - 15.655: 99.1019% ( 3) 00:09:24.731 15.655 - 15.754: 99.1243% ( 4) 00:09:24.731 15.754 - 15.852: 99.1299% ( 1) 00:09:24.731 15.852 - 15.951: 99.1524% ( 4) 00:09:24.731 15.951 - 16.049: 99.1749% ( 4) 00:09:24.731 16.049 - 16.148: 99.1917% ( 3) 00:09:24.731 16.148 - 16.246: 99.1973% ( 1) 00:09:24.731 16.345 - 16.443: 99.2254% ( 5) 00:09:24.731 16.443 - 16.542: 99.2534% ( 5) 00:09:24.731 16.542 - 16.640: 99.2591% ( 1) 00:09:24.731 16.640 - 16.738: 99.2871% ( 5) 00:09:24.731 16.738 - 16.837: 99.3264% ( 7) 00:09:24.731 16.837 - 16.935: 99.3433% ( 3) 00:09:24.731 16.935 - 17.034: 99.3882% ( 8) 00:09:24.731 17.034 - 17.132: 99.4106% ( 4) 00:09:24.731 17.132 - 17.231: 99.4331% ( 4) 00:09:24.731 17.231 - 17.329: 99.4836% ( 9) 00:09:24.731 17.329 - 17.428: 99.5341% ( 9) 00:09:24.731 17.428 - 17.526: 99.5846% ( 9) 00:09:24.731 17.526 - 17.625: 99.6464% ( 11) 00:09:24.731 17.625 - 17.723: 99.6688% ( 4) 00:09:24.731 17.723 - 17.822: 99.6744% ( 1) 00:09:24.731 17.822 - 17.920: 99.7081% ( 6) 00:09:24.731 17.920 - 18.018: 99.7193% ( 2) 00:09:24.731 18.018 - 18.117: 99.7306% ( 2) 00:09:24.731 18.117 - 18.215: 99.7474% ( 3) 00:09:24.731 18.314 - 18.412: 99.7530% ( 1) 00:09:24.731 18.511 - 18.609: 99.7642% ( 2) 00:09:24.731 18.609 - 18.708: 99.7699% ( 1) 00:09:24.731 18.708 - 18.806: 99.7755% ( 1) 00:09:24.731 18.905 - 19.003: 99.7867% ( 2) 00:09:24.731 19.003 - 19.102: 99.7923% ( 1) 00:09:24.731 19.102 - 19.200: 99.7979% ( 1) 00:09:24.731 19.397 - 19.495: 99.8091% ( 2) 00:09:24.731 19.988 - 20.086: 99.8148% ( 1) 00:09:24.731 20.086 - 20.185: 99.8204% ( 1) 00:09:24.731 20.382 - 20.480: 99.8260% ( 1) 00:09:24.731 20.578 - 20.677: 99.8316% ( 1) 00:09:24.731 21.268 - 21.366: 99.8372% ( 1) 00:09:24.731 21.366 - 21.465: 99.8428% ( 1) 00:09:24.731 21.563 - 21.662: 99.8484% ( 1) 00:09:24.731 21.662 - 21.760: 99.8541% ( 1) 00:09:24.731 21.957 - 22.055: 99.8597% ( 1) 00:09:24.731 22.055 - 22.154: 99.8653% ( 1) 00:09:24.731 22.154 - 22.252: 99.8765% ( 2) 00:09:24.731 22.351 - 22.449: 99.8821% ( 1) 00:09:24.731 22.449 - 22.548: 99.8877% ( 1) 00:09:24.731 22.942 - 23.040: 99.8933% ( 1) 00:09:24.731 26.782 - 26.978: 99.8990% ( 1) 00:09:24.731 26.978 - 27.175: 99.9046% ( 1) 00:09:24.731 27.569 - 27.766: 99.9158% ( 2) 00:09:24.731 27.766 - 27.963: 99.9214% ( 1) 00:09:24.731 28.160 - 28.357: 99.9270% ( 1) 00:09:24.731 29.145 - 29.342: 99.9326% ( 1) 00:09:24.731 29.932 - 30.129: 99.9383% ( 1) 00:09:24.731 30.523 - 30.720: 99.9439% ( 1) 00:09:24.731 30.917 - 31.114: 99.9495% ( 1) 00:09:24.731 37.809 - 38.006: 99.9551% ( 1) 00:09:24.731 38.203 - 38.400: 99.9607% ( 1) 00:09:24.731 42.338 - 42.535: 99.9663% ( 1) 00:09:24.731 50.806 - 51.200: 99.9719% ( 1) 00:09:24.731 53.957 - 54.351: 99.9775% ( 1) 00:09:24.731 57.895 - 58.289: 99.9832% ( 1) 00:09:24.731 58.289 - 58.683: 99.9888% ( 1) 00:09:24.731 65.378 - 65.772: 99.9944% ( 1) 00:09:24.731 266.240 - 267.815: 100.0000% ( 1) 00:09:24.731 00:09:24.731 00:09:24.731 real 0m1.185s 00:09:24.731 user 0m1.041s 00:09:24.731 sys 0m0.099s 00:09:24.731 ************************************ 00:09:24.731 END TEST nvme_overhead 00:09:24.731 ************************************ 00:09:24.731 11:04:53 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.731 11:04:53 
nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 11:04:53 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:24.731 11:04:53 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:24.731 11:04:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.731 11:04:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 ************************************ 00:09:24.731 START TEST nvme_arbitration 00:09:24.731 ************************************ 00:09:24.731 11:04:53 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:28.033 Initializing NVMe Controllers 00:09:28.033 Attached to 0000:00:13.0 00:09:28.033 Attached to 0000:00:10.0 00:09:28.033 Attached to 0000:00:11.0 00:09:28.033 Attached to 0000:00:12.0 00:09:28.033 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:09:28.033 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:09:28.033 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:09:28.033 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:28.033 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:28.033 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:28.033 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:28.033 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:28.033 Initialization complete. Launching workers. 00:09:28.033 Starting thread on core 1 with urgent priority queue 00:09:28.033 Starting thread on core 2 with urgent priority queue 00:09:28.033 Starting thread on core 3 with urgent priority queue 00:09:28.033 Starting thread on core 0 with urgent priority queue 00:09:28.033 QEMU NVMe Ctrl (12343 ) core 0: 6336.00 IO/s 15.78 secs/100000 ios 00:09:28.033 QEMU NVMe Ctrl (12342 ) core 0: 6342.67 IO/s 15.77 secs/100000 ios 00:09:28.033 QEMU NVMe Ctrl (12340 ) core 1: 6351.33 IO/s 15.74 secs/100000 ios 00:09:28.033 QEMU NVMe Ctrl (12342 ) core 1: 6352.00 IO/s 15.74 secs/100000 ios 00:09:28.033 QEMU NVMe Ctrl (12341 ) core 2: 5935.67 IO/s 16.85 secs/100000 ios 00:09:28.033 QEMU NVMe Ctrl (12342 ) core 3: 5917.00 IO/s 16.90 secs/100000 ios 00:09:28.033 ======================================================== 00:09:28.033 00:09:28.033 00:09:28.033 real 0m3.208s 00:09:28.033 user 0m8.994s 00:09:28.033 sys 0m0.095s 00:09:28.033 ************************************ 00:09:28.033 END TEST nvme_arbitration 00:09:28.033 ************************************ 00:09:28.033 11:04:56 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.033 11:04:56 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:28.033 11:04:56 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:28.033 11:04:56 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:28.033 11:04:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.033 11:04:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.033 ************************************ 00:09:28.033 START TEST nvme_single_aen 00:09:28.033 ************************************ 00:09:28.033 11:04:56 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:28.033 Asynchronous Event Request test 00:09:28.033 Attached to 
0000:00:13.0 00:09:28.033 Attached to 0000:00:10.0 00:09:28.033 Attached to 0000:00:11.0 00:09:28.033 Attached to 0000:00:12.0 00:09:28.033 Reset controller to setup AER completions for this process 00:09:28.033 Registering asynchronous event callbacks... 00:09:28.033 Getting orig temperature thresholds of all controllers 00:09:28.033 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:28.033 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:28.033 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:28.033 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:28.033 Setting all controllers temperature threshold low to trigger AER 00:09:28.033 Waiting for all controllers temperature threshold to be set lower 00:09:28.033 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:28.033 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:28.033 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:28.033 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:28.033 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:28.033 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:28.033 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:28.033 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:28.033 Waiting for all controllers to trigger AER and reset threshold 00:09:28.033 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.033 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.033 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.033 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.033 Cleaning up... 
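The AER exercise above works by lowering the Temperature Threshold feature (NVMe Feature ID 0x04) below the reported composite temperature of 323 Kelvin, which makes each controller post an asynchronous event, and then the thresholds return to the original 343 Kelvin. The SPDK test drives this through its own userspace driver; on a kernel-managed drive the same mechanism can be exercised with nvme-cli, shown here only as a rough, hypothetical equivalent (device name /dev/nvme0 assumed):

    # read the current Temperature Threshold (Feature 0x04)
    nvme get-feature /dev/nvme0 -f 0x04 -H
    # drop it below the 323 K composite temperature to provoke the event (0x140 = 320 K)
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x140
    # restore the default threshold (0x157 = 343 K)
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x157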
00:09:28.033 00:09:28.033 real 0m0.206s 00:09:28.033 user 0m0.067s 00:09:28.033 sys 0m0.087s 00:09:28.033 ************************************ 00:09:28.033 END TEST nvme_single_aen 00:09:28.033 ************************************ 00:09:28.033 11:04:56 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.033 11:04:56 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:28.294 11:04:56 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:28.294 11:04:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:28.294 11:04:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.294 11:04:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.294 ************************************ 00:09:28.294 START TEST nvme_doorbell_aers 00:09:28.294 ************************************ 00:09:28.294 11:04:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:09:28.294 11:04:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:28.294 11:04:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:28.294 11:04:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:28.294 11:04:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:28.294 11:04:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:28.294 11:04:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:09:28.294 11:04:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:28.294 11:04:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:28.294 11:04:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:28.294 11:04:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:28.294 11:04:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:28.294 11:04:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:28.294 11:04:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:28.556 [2024-11-27 11:04:57.181048] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:09:38.552 Executing: test_write_invalid_db 00:09:38.552 Waiting for AER completion... 00:09:38.552 Failure: test_write_invalid_db 00:09:38.552 00:09:38.552 Executing: test_invalid_db_write_overflow_sq 00:09:38.552 Waiting for AER completion... 00:09:38.552 Failure: test_invalid_db_write_overflow_sq 00:09:38.552 00:09:38.552 Executing: test_invalid_db_write_overflow_cq 00:09:38.552 Waiting for AER completion... 
00:09:38.552 Failure: test_invalid_db_write_overflow_cq 00:09:38.552 00:09:38.552 11:05:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:38.552 11:05:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:38.552 [2024-11-27 11:05:07.222314] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:09:48.519 Executing: test_write_invalid_db 00:09:48.519 Waiting for AER completion... 00:09:48.519 Failure: test_write_invalid_db 00:09:48.519 00:09:48.519 Executing: test_invalid_db_write_overflow_sq 00:09:48.519 Waiting for AER completion... 00:09:48.519 Failure: test_invalid_db_write_overflow_sq 00:09:48.519 00:09:48.519 Executing: test_invalid_db_write_overflow_cq 00:09:48.519 Waiting for AER completion... 00:09:48.519 Failure: test_invalid_db_write_overflow_cq 00:09:48.519 00:09:48.519 11:05:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:48.519 11:05:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:48.519 [2024-11-27 11:05:17.260469] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:09:58.479 Executing: test_write_invalid_db 00:09:58.479 Waiting for AER completion... 00:09:58.479 Failure: test_write_invalid_db 00:09:58.479 00:09:58.479 Executing: test_invalid_db_write_overflow_sq 00:09:58.479 Waiting for AER completion... 00:09:58.479 Failure: test_invalid_db_write_overflow_sq 00:09:58.479 00:09:58.479 Executing: test_invalid_db_write_overflow_cq 00:09:58.479 Waiting for AER completion... 00:09:58.479 Failure: test_invalid_db_write_overflow_cq 00:09:58.479 00:09:58.479 11:05:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:58.479 11:05:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:58.479 [2024-11-27 11:05:27.283162] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:10:08.453 Executing: test_write_invalid_db 00:10:08.453 Waiting for AER completion... 00:10:08.453 Failure: test_write_invalid_db 00:10:08.453 00:10:08.453 Executing: test_invalid_db_write_overflow_sq 00:10:08.453 Waiting for AER completion... 00:10:08.453 Failure: test_invalid_db_write_overflow_sq 00:10:08.453 00:10:08.453 Executing: test_invalid_db_write_overflow_cq 00:10:08.453 Waiting for AER completion... 
00:10:08.453 Failure: test_invalid_db_write_overflow_cq 00:10:08.453 00:10:08.453 00:10:08.453 real 0m40.192s 00:10:08.453 user 0m34.288s 00:10:08.453 sys 0m5.534s 00:10:08.453 11:05:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.453 11:05:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:08.453 ************************************ 00:10:08.453 END TEST nvme_doorbell_aers 00:10:08.453 ************************************ 00:10:08.453 11:05:37 nvme -- nvme/nvme.sh@97 -- # uname 00:10:08.453 11:05:37 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:08.453 11:05:37 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:08.453 11:05:37 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:10:08.453 11:05:37 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.453 11:05:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.453 ************************************ 00:10:08.453 START TEST nvme_multi_aen 00:10:08.453 ************************************ 00:10:08.453 11:05:37 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:08.710 [2024-11-27 11:05:37.334805] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:10:08.710 [2024-11-27 11:05:37.334872] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:10:08.710 [2024-11-27 11:05:37.334882] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:10:08.710 [2024-11-27 11:05:37.336880] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:10:08.710 [2024-11-27 11:05:37.337015] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:10:08.710 [2024-11-27 11:05:37.337049] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:10:08.710 [2024-11-27 11:05:37.339417] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:10:08.710 [2024-11-27 11:05:37.339734] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:10:08.710 [2024-11-27 11:05:37.340054] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:10:08.711 [2024-11-27 11:05:37.342362] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:10:08.711 [2024-11-27 11:05:37.342692] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 00:10:08.711 [2024-11-27 11:05:37.342978] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75461) is not found. Dropping the request. 
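The nvme_doorbell_aers run above derives its device list from the repository's own config generator and then invokes the doorbell_aers binary once per controller under a 10-second timeout; that is the pattern visible in its xtrace lines. Reassembled as a standalone sketch (rootdir assumed to be the checkout used in this run):

    rootdir=/home/vagrant/spdk_repo/spdk
    # one PCI address per attached NVMe controller, e.g. 0000:00:10.0 ... 0000:00:13.0
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
            -r "trtype:PCIe traddr:$bdf"
    done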
00:10:08.711 Child process pid: 75981 00:10:08.711 [Child] Asynchronous Event Request test 00:10:08.711 [Child] Attached to 0000:00:13.0 00:10:08.711 [Child] Attached to 0000:00:10.0 00:10:08.711 [Child] Attached to 0000:00:11.0 00:10:08.711 [Child] Attached to 0000:00:12.0 00:10:08.711 [Child] Registering asynchronous event callbacks... 00:10:08.711 [Child] Getting orig temperature thresholds of all controllers 00:10:08.711 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:08.711 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:08.711 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:08.711 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:08.711 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:08.711 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:08.711 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:08.711 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:08.711 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:08.711 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:08.711 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:08.711 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:08.711 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:08.711 [Child] Cleaning up... 00:10:08.711 Asynchronous Event Request test 00:10:08.711 Attached to 0000:00:13.0 00:10:08.711 Attached to 0000:00:10.0 00:10:08.711 Attached to 0000:00:11.0 00:10:08.711 Attached to 0000:00:12.0 00:10:08.711 Reset controller to setup AER completions for this process 00:10:08.711 Registering asynchronous event callbacks... 
00:10:08.711 Getting orig temperature thresholds of all controllers 00:10:08.711 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:08.711 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:08.711 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:08.711 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:08.711 Setting all controllers temperature threshold low to trigger AER 00:10:08.711 Waiting for all controllers temperature threshold to be set lower 00:10:08.711 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:08.711 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:08.711 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:08.711 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:08.711 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:08.711 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:08.711 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:08.711 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:08.711 Waiting for all controllers to trigger AER and reset threshold 00:10:08.711 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:08.711 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:08.711 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:08.711 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:08.711 Cleaning up... 00:10:08.711 00:10:08.711 real 0m0.373s 00:10:08.711 user 0m0.104s 00:10:08.711 sys 0m0.162s 00:10:08.711 11:05:37 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.711 11:05:37 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:08.711 ************************************ 00:10:08.711 END TEST nvme_multi_aen 00:10:08.711 ************************************ 00:10:08.711 11:05:37 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:08.711 11:05:37 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:08.711 11:05:37 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.711 11:05:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.711 ************************************ 00:10:08.711 START TEST nvme_startup 00:10:08.711 ************************************ 00:10:08.711 11:05:37 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:08.969 Initializing NVMe Controllers 00:10:08.969 Attached to 0000:00:13.0 00:10:08.969 Attached to 0000:00:10.0 00:10:08.969 Attached to 0000:00:11.0 00:10:08.969 Attached to 0000:00:12.0 00:10:08.969 Initialization complete. 00:10:08.969 Time used:126469.891 (us). 
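For scale, the startup figure is reported in microseconds: 126469.891 us is roughly 126.5 ms to attach and initialize all four controllers, so most of the 0m0.182s wall-clock time printed next for nvme_startup is that initialization, with the remainder being process start-up and teardown.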
00:10:08.969 ************************************ 00:10:08.969 END TEST nvme_startup 00:10:08.969 ************************************ 00:10:08.969 00:10:08.969 real 0m0.182s 00:10:08.969 user 0m0.050s 00:10:08.969 sys 0m0.087s 00:10:08.969 11:05:37 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.969 11:05:37 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:08.969 11:05:37 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:08.969 11:05:37 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:08.969 11:05:37 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.969 11:05:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.969 ************************************ 00:10:08.969 START TEST nvme_multi_secondary 00:10:08.969 ************************************ 00:10:08.969 11:05:37 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:10:08.969 11:05:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=76032 00:10:08.969 11:05:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=76033 00:10:08.969 11:05:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:08.969 11:05:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:08.969 11:05:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:12.245 Initializing NVMe Controllers 00:10:12.245 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:12.245 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:12.245 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:12.245 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:12.245 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:12.245 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:12.245 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:12.245 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:12.245 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:12.245 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:12.245 Initialization complete. Launching workers. 
00:10:12.245 ======================================================== 00:10:12.245 Latency(us) 00:10:12.245 Device Information : IOPS MiB/s Average min max 00:10:12.245 PCIE (0000:00:13.0) NSID 1 from core 1: 7921.09 30.94 2019.51 1052.43 5589.65 00:10:12.245 PCIE (0000:00:10.0) NSID 1 from core 1: 7921.09 30.94 2018.66 1074.11 6508.88 00:10:12.245 PCIE (0000:00:11.0) NSID 1 from core 1: 7921.09 30.94 2019.61 1041.53 6338.32 00:10:12.245 PCIE (0000:00:12.0) NSID 1 from core 1: 7921.09 30.94 2019.65 1052.20 6327.83 00:10:12.245 PCIE (0000:00:12.0) NSID 2 from core 1: 7921.09 30.94 2019.75 1107.70 6084.50 00:10:12.245 PCIE (0000:00:12.0) NSID 3 from core 1: 7921.09 30.94 2019.73 1039.98 5392.67 00:10:12.245 ======================================================== 00:10:12.245 Total : 47526.53 185.65 2019.49 1039.98 6508.88 00:10:12.245 00:10:12.503 Initializing NVMe Controllers 00:10:12.503 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:12.503 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:12.503 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:12.503 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:12.503 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:12.503 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:12.503 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:12.503 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:12.503 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:12.503 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:12.503 Initialization complete. Launching workers. 00:10:12.503 ======================================================== 00:10:12.503 Latency(us) 00:10:12.503 Device Information : IOPS MiB/s Average min max 00:10:12.503 PCIE (0000:00:13.0) NSID 1 from core 2: 3229.71 12.62 4953.61 921.66 32916.04 00:10:12.503 PCIE (0000:00:10.0) NSID 1 from core 2: 3229.71 12.62 4951.95 996.38 31985.61 00:10:12.503 PCIE (0000:00:11.0) NSID 1 from core 2: 3229.71 12.62 4953.48 989.36 32192.09 00:10:12.503 PCIE (0000:00:12.0) NSID 1 from core 2: 3229.71 12.62 4953.79 1000.69 32277.09 00:10:12.503 PCIE (0000:00:12.0) NSID 2 from core 2: 3229.71 12.62 4953.68 985.34 25479.18 00:10:12.503 PCIE (0000:00:12.0) NSID 3 from core 2: 3229.71 12.62 4953.57 962.47 33752.80 00:10:12.503 ======================================================== 00:10:12.503 Total : 19378.25 75.70 4953.35 921.66 33752.80 00:10:12.503 00:10:12.503 11:05:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 76032 00:10:14.401 Initializing NVMe Controllers 00:10:14.401 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:14.401 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:14.401 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:14.401 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:14.401 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:14.401 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:14.401 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:14.401 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:14.401 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:14.401 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:14.401 Initialization complete. Launching workers. 
00:10:14.401 ======================================================== 00:10:14.401 Latency(us) 00:10:14.401 Device Information : IOPS MiB/s Average min max 00:10:14.401 PCIE (0000:00:13.0) NSID 1 from core 0: 11260.39 43.99 1420.55 706.54 5689.92 00:10:14.401 PCIE (0000:00:10.0) NSID 1 from core 0: 11260.39 43.99 1419.70 687.49 5669.92 00:10:14.401 PCIE (0000:00:11.0) NSID 1 from core 0: 11260.39 43.99 1420.52 658.57 5026.76 00:10:14.401 PCIE (0000:00:12.0) NSID 1 from core 0: 11260.39 43.99 1420.49 511.78 4800.16 00:10:14.401 PCIE (0000:00:12.0) NSID 2 from core 0: 11260.39 43.99 1420.48 457.95 5185.02 00:10:14.401 PCIE (0000:00:12.0) NSID 3 from core 0: 11260.39 43.99 1420.41 382.98 5828.29 00:10:14.401 ======================================================== 00:10:14.401 Total : 67562.34 263.92 1420.36 382.98 5828.29 00:10:14.401 00:10:14.401 11:05:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 76033 00:10:14.401 11:05:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=76102 00:10:14.401 11:05:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:14.401 11:05:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=76103 00:10:14.401 11:05:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:14.401 11:05:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:17.748 Initializing NVMe Controllers 00:10:17.748 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:17.748 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:17.748 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:17.748 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:17.748 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:17.748 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:17.748 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:17.748 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:17.748 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:17.748 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:17.748 Initialization complete. Launching workers. 
00:10:17.748 ======================================================== 00:10:17.748 Latency(us) 00:10:17.748 Device Information : IOPS MiB/s Average min max 00:10:17.748 PCIE (0000:00:13.0) NSID 1 from core 1: 6886.66 26.90 2322.92 726.29 14300.57 00:10:17.748 PCIE (0000:00:10.0) NSID 1 from core 1: 6886.66 26.90 2322.13 716.93 15422.80 00:10:17.748 PCIE (0000:00:11.0) NSID 1 from core 1: 6886.66 26.90 2323.31 730.30 15271.20 00:10:17.748 PCIE (0000:00:12.0) NSID 1 from core 1: 6886.66 26.90 2323.34 742.65 13362.46 00:10:17.748 PCIE (0000:00:12.0) NSID 2 from core 1: 6886.66 26.90 2323.32 724.29 10983.19 00:10:17.748 PCIE (0000:00:12.0) NSID 3 from core 1: 6886.66 26.90 2323.32 729.49 14331.23 00:10:17.748 ======================================================== 00:10:17.748 Total : 41319.96 161.41 2323.06 716.93 15422.80 00:10:17.748 00:10:17.748 Initializing NVMe Controllers 00:10:17.748 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:17.748 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:17.748 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:17.748 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:17.748 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:17.748 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:17.748 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:17.748 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:17.748 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:17.748 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:17.748 Initialization complete. Launching workers. 00:10:17.748 ======================================================== 00:10:17.748 Latency(us) 00:10:17.748 Device Information : IOPS MiB/s Average min max 00:10:17.748 PCIE (0000:00:13.0) NSID 1 from core 0: 6958.45 27.18 2298.91 723.09 9957.04 00:10:17.748 PCIE (0000:00:10.0) NSID 1 from core 0: 6958.45 27.18 2297.87 706.18 9808.13 00:10:17.748 PCIE (0000:00:11.0) NSID 1 from core 0: 6958.45 27.18 2298.78 650.60 10518.06 00:10:17.748 PCIE (0000:00:12.0) NSID 1 from core 0: 6958.45 27.18 2298.70 499.56 11201.92 00:10:17.748 PCIE (0000:00:12.0) NSID 2 from core 0: 6958.45 27.18 2298.64 449.18 9906.32 00:10:17.748 PCIE (0000:00:12.0) NSID 3 from core 0: 6958.45 27.18 2298.55 363.40 9942.26 00:10:17.748 ======================================================== 00:10:17.748 Total : 41750.72 163.09 2298.58 363.40 11201.92 00:10:17.748 00:10:19.660 Initializing NVMe Controllers 00:10:19.660 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:19.660 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:19.660 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:19.660 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:19.660 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:19.660 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:19.660 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:19.660 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:19.660 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:19.660 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:19.660 Initialization complete. Launching workers. 
00:10:19.660 ======================================================== 00:10:19.660 Latency(us) 00:10:19.660 Device Information : IOPS MiB/s Average min max 00:10:19.660 PCIE (0000:00:13.0) NSID 1 from core 2: 2886.90 11.28 5541.85 773.29 29787.22 00:10:19.660 PCIE (0000:00:10.0) NSID 1 from core 2: 2886.90 11.28 5541.25 778.35 30083.67 00:10:19.660 PCIE (0000:00:11.0) NSID 1 from core 2: 2886.90 11.28 5542.31 727.45 29782.40 00:10:19.660 PCIE (0000:00:12.0) NSID 1 from core 2: 2886.90 11.28 5542.49 771.15 31312.36 00:10:19.660 PCIE (0000:00:12.0) NSID 2 from core 2: 2886.90 11.28 5542.37 773.79 33556.27 00:10:19.660 PCIE (0000:00:12.0) NSID 3 from core 2: 2886.90 11.28 5541.99 770.48 29680.09 00:10:19.660 ======================================================== 00:10:19.660 Total : 17321.41 67.66 5542.05 727.45 33556.27 00:10:19.660 00:10:19.660 ************************************ 00:10:19.660 END TEST nvme_multi_secondary 00:10:19.660 ************************************ 00:10:19.660 11:05:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 76102 00:10:19.660 11:05:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 76103 00:10:19.660 00:10:19.660 real 0m10.581s 00:10:19.660 user 0m18.180s 00:10:19.660 sys 0m0.559s 00:10:19.660 11:05:48 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.660 11:05:48 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:19.660 11:05:48 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:19.660 11:05:48 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:19.660 11:05:48 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/75069 ]] 00:10:19.660 11:05:48 nvme -- common/autotest_common.sh@1090 -- # kill 75069 00:10:19.660 11:05:48 nvme -- common/autotest_common.sh@1091 -- # wait 75069 00:10:19.660 [2024-11-27 11:05:48.437941] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.438078] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.438105] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.438130] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.439095] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.439169] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.439193] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.439218] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.440462] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 
00:10:19.660 [2024-11-27 11:05:48.440578] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.440608] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.440640] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.441917] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.442006] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.442035] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 [2024-11-27 11:05:48.442065] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75980) is not found. Dropping the request. 00:10:19.660 11:05:48 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:10:19.660 11:05:48 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:10:19.660 11:05:48 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:19.660 11:05:48 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:19.660 11:05:48 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.660 11:05:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:19.660 ************************************ 00:10:19.660 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:19.660 ************************************ 00:10:19.660 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:19.918 * Looking for test storage... 
00:10:19.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.918 --rc genhtml_branch_coverage=1 00:10:19.918 --rc genhtml_function_coverage=1 00:10:19.918 --rc genhtml_legend=1 00:10:19.918 --rc geninfo_all_blocks=1 00:10:19.918 --rc geninfo_unexecuted_blocks=1 00:10:19.918 00:10:19.918 ' 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.918 --rc genhtml_branch_coverage=1 00:10:19.918 --rc genhtml_function_coverage=1 00:10:19.918 --rc genhtml_legend=1 00:10:19.918 --rc geninfo_all_blocks=1 00:10:19.918 --rc geninfo_unexecuted_blocks=1 00:10:19.918 00:10:19.918 ' 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.918 --rc genhtml_branch_coverage=1 00:10:19.918 --rc genhtml_function_coverage=1 00:10:19.918 --rc genhtml_legend=1 00:10:19.918 --rc geninfo_all_blocks=1 00:10:19.918 --rc geninfo_unexecuted_blocks=1 00:10:19.918 00:10:19.918 ' 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:19.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.918 --rc genhtml_branch_coverage=1 00:10:19.918 --rc genhtml_function_coverage=1 00:10:19.918 --rc genhtml_legend=1 00:10:19.918 --rc geninfo_all_blocks=1 00:10:19.918 --rc geninfo_unexecuted_blocks=1 00:10:19.918 00:10:19.918 ' 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:19.918 
11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:19.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76270 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76270 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 76270 ']' 00:10:19.918 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.919 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.919 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
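A condensed sketch of what the helpers traced above boil down to: enumerate NVMe controllers with gen_nvme.sh, take the first bdf, start spdk_tgt, and poll the RPC socket before sending bdev_nvme_* commands. Paths and the 0xF core mask are taken from the log; the polling loop is a simplification of waitforlisten, not the exact autotest_common.sh source.
    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits a JSON attach-controller config; jq pulls out the PCI addresses
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}                        # first controller, 0000:00:10.0 in this run
    "$rootdir/build/bin/spdk_tgt" -m 0xF &
    spdk_target_pid=$!
    # wait until the RPC socket answers before issuing further commands
    until "$rootdir/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null; do sleep 0.5; done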
00:10:19.919 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.919 11:05:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:20.177 [2024-11-27 11:05:48.801708] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:20.177 [2024-11-27 11:05:48.801983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76270 ] 00:10:20.177 [2024-11-27 11:05:48.958658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.177 [2024-11-27 11:05:48.993666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.177 [2024-11-27 11:05:48.993885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.177 [2024-11-27 11:05:48.994229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.177 [2024-11-27 11:05:48.993965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:21.112 nvme0n1 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_IGGIY.txt 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:21.112 true 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732705549 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76293 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:21.112 11:05:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h 
-c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:23.024 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:23.024 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.024 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:23.024 [2024-11-27 11:05:51.722627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:23.024 [2024-11-27 11:05:51.722995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:23.024 [2024-11-27 11:05:51.723089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:23.024 [2024-11-27 11:05:51.723177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:23.024 [2024-11-27 11:05:51.724859] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:23.024 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76293 00:10:23.024 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.024 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76293 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76293 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_IGGIY.txt 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- 
# printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_IGGIY.txt 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76270 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 76270 ']' 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 76270 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76270 00:10:23.025 killing process with pid 76270 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76270' 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 76270 00:10:23.025 11:05:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 76270 00:10:23.284 11:05:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:23.284 11:05:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:23.284 ************************************ 00:10:23.284 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:23.284 ************************************ 00:10:23.284 00:10:23.284 real 0m3.558s 00:10:23.284 user 0m12.653s 00:10:23.284 
sys 0m0.466s 00:10:23.284 11:05:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.284 11:05:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:23.284 11:05:52 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:23.284 11:05:52 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:23.284 11:05:52 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:23.284 11:05:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.284 11:05:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.284 ************************************ 00:10:23.284 START TEST nvme_fio 00:10:23.284 ************************************ 00:10:23.284 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:10:23.284 11:05:52 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:23.284 11:05:52 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:23.284 11:05:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:23.284 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:23.284 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:10:23.284 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:23.284 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:23.284 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:23.544 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:23.544 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:23.544 11:05:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:23.544 11:05:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:23.544 11:05:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:23.544 11:05:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:23.544 11:05:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:23.544 11:05:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:23.544 11:05:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:23.806 11:05:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:23.806 11:05:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:23.806 11:05:52 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:24.065 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:24.065 fio-3.35 00:10:24.065 Starting 1 thread 00:10:30.669 00:10:30.669 test: (groupid=0, jobs=1): err= 0: pid=76416: Wed Nov 27 11:05:58 2024 00:10:30.669 read: IOPS=21.5k, BW=83.8MiB/s (87.9MB/s)(168MiB/2001msec) 00:10:30.669 slat (nsec): min=3331, max=65881, avg=5219.00, stdev=2535.92 00:10:30.669 clat (usec): min=231, max=9884, avg=2978.64, stdev=988.94 00:10:30.669 lat (usec): min=235, max=9888, avg=2983.86, stdev=990.27 00:10:30.669 clat percentiles (usec): 00:10:30.669 | 1.00th=[ 1926], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2409], 00:10:30.669 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2704], 00:10:30.669 | 70.00th=[ 2900], 80.00th=[ 3294], 90.00th=[ 4359], 95.00th=[ 5342], 00:10:30.669 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 8160], 99.95th=[ 8586], 00:10:30.669 | 99.99th=[ 9765] 00:10:30.669 bw ( KiB/s): min=79784, max=95248, per=100.00%, avg=86341.33, stdev=7995.21, samples=3 00:10:30.669 iops : min=19946, max=23812, avg=21585.33, stdev=1998.80, samples=3 00:10:30.669 write: IOPS=21.3k, BW=83.2MiB/s (87.2MB/s)(166MiB/2001msec); 0 zone resets 00:10:30.669 slat (nsec): min=3457, max=70589, avg=5384.16, stdev=2500.98 00:10:30.669 clat (usec): min=210, max=9954, avg=2989.48, stdev=998.66 00:10:30.669 lat (usec): min=215, max=9958, avg=2994.87, stdev=999.94 00:10:30.669 clat percentiles (usec): 00:10:30.669 | 1.00th=[ 1876], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2409], 00:10:30.669 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2606], 60.00th=[ 2704], 00:10:30.669 | 70.00th=[ 2900], 80.00th=[ 3326], 90.00th=[ 4359], 95.00th=[ 5407], 00:10:30.669 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 8160], 99.95th=[ 8717], 00:10:30.669 | 99.99th=[ 9765] 00:10:30.669 bw ( KiB/s): min=79696, max=94992, per=100.00%, avg=86456.00, stdev=7801.12, samples=3 00:10:30.669 iops : min=19924, max=23748, avg=21614.00, stdev=1950.28, samples=3 00:10:30.669 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:10:30.669 lat (msec) : 2=1.45%, 4=85.66%, 10=12.85% 00:10:30.669 cpu : usr=99.15%, sys=0.00%, ctx=10, majf=0, minf=627 00:10:30.669 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:30.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.669 issued rwts: total=42940,42613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.669 00:10:30.669 Run status group 0 (all jobs): 00:10:30.669 READ: bw=83.8MiB/s (87.9MB/s), 83.8MiB/s-83.8MiB/s (87.9MB/s-87.9MB/s), io=168MiB (176MB), run=2001-2001msec 00:10:30.669 WRITE: bw=83.2MiB/s (87.2MB/s), 83.2MiB/s-83.2MiB/s (87.2MB/s-87.2MB/s), io=166MiB (175MB), run=2001-2001msec 00:10:30.669 ----------------------------------------------------- 00:10:30.669 Suppressions used: 00:10:30.669 count bytes template 00:10:30.669 1 32 /usr/src/fio/parse.c 00:10:30.669 1 8 libtcmalloc_minimal.so 00:10:30.669 ----------------------------------------------------- 00:10:30.669 00:10:30.669 11:05:58 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:30.669 11:05:58 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:30.669 11:05:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:30.669 11:05:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:30.669 11:05:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:30.669 11:05:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:30.669 11:05:59 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:30.669 11:05:59 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:30.669 11:05:59 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:30.669 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:30.669 fio-3.35 00:10:30.669 Starting 1 thread 00:10:37.248 00:10:37.248 test: (groupid=0, jobs=1): err= 0: pid=76472: Wed Nov 27 11:06:05 2024 00:10:37.248 read: IOPS=20.8k, BW=81.2MiB/s (85.1MB/s)(162MiB/2001msec) 00:10:37.248 slat (usec): min=3, max=442, avg= 5.28, stdev= 3.42 00:10:37.248 clat (usec): min=200, max=11030, avg=3074.43, stdev=1114.09 00:10:37.248 lat (usec): min=213, max=11062, avg=3079.71, stdev=1115.44 00:10:37.248 clat percentiles (usec): 00:10:37.248 | 1.00th=[ 1975], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2376], 00:10:37.248 | 30.00th=[ 2409], 40.00th=[ 2507], 50.00th=[ 2638], 60.00th=[ 2769], 00:10:37.248 | 70.00th=[ 2999], 80.00th=[ 3621], 90.00th=[ 4883], 95.00th=[ 5669], 00:10:37.248 | 99.00th=[ 6849], 99.50th=[ 7308], 99.90th=[ 8094], 99.95th=[ 8848], 00:10:37.248 | 99.99th=[10683] 00:10:37.248 bw ( KiB/s): min=74520, max=86760, per=99.47%, avg=82674.67, stdev=7062.15, samples=3 00:10:37.248 iops : min=18630, max=21690, avg=20668.67, stdev=1765.54, samples=3 00:10:37.248 write: IOPS=20.7k, BW=80.8MiB/s (84.8MB/s)(162MiB/2001msec); 0 zone resets 00:10:37.248 slat (usec): min=3, max=294, avg= 5.41, stdev= 3.03 00:10:37.248 clat (usec): min=229, max=10871, avg=3079.06, stdev=1100.97 00:10:37.248 lat (usec): min=233, max=10881, avg=3084.47, stdev=1102.30 00:10:37.248 clat percentiles (usec): 00:10:37.248 | 1.00th=[ 2008], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2376], 00:10:37.248 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2638], 60.00th=[ 2802], 00:10:37.248 | 70.00th=[ 3032], 80.00th=[ 3589], 90.00th=[ 4883], 95.00th=[ 5604], 00:10:37.248 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 8094], 99.95th=[ 8979], 00:10:37.248 | 99.99th=[10421] 00:10:37.248 bw ( KiB/s): min=75080, max=86648, per=99.93%, avg=82720.00, stdev=6617.32, samples=3 00:10:37.248 iops : min=18770, max=21662, avg=20680.00, stdev=1654.33, samples=3 00:10:37.248 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:37.248 lat (msec) : 2=1.02%, 4=82.50%, 10=16.43%, 20=0.02% 00:10:37.248 cpu : usr=98.75%, sys=0.15%, ctx=19, majf=0, minf=626 00:10:37.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:37.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:37.248 issued rwts: total=41579,41409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:37.248 00:10:37.248 Run status group 0 (all jobs): 00:10:37.248 READ: bw=81.2MiB/s (85.1MB/s), 81.2MiB/s-81.2MiB/s (85.1MB/s-85.1MB/s), io=162MiB (170MB), run=2001-2001msec 00:10:37.248 WRITE: bw=80.8MiB/s (84.8MB/s), 80.8MiB/s-80.8MiB/s (84.8MB/s-84.8MB/s), io=162MiB (170MB), run=2001-2001msec 00:10:37.248 ----------------------------------------------------- 00:10:37.248 Suppressions used: 00:10:37.248 count bytes template 00:10:37.248 1 32 /usr/src/fio/parse.c 00:10:37.248 1 8 libtcmalloc_minimal.so 00:10:37.248 ----------------------------------------------------- 00:10:37.248 00:10:37.248 11:06:05 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:10:37.248 11:06:05 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:37.248 11:06:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:37.248 11:06:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:37.248 11:06:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:37.248 11:06:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:37.248 11:06:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:37.248 11:06:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:37.248 11:06:05 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:37.248 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:37.248 fio-3.35 00:10:37.248 Starting 1 thread 00:10:45.360 00:10:45.360 test: (groupid=0, jobs=1): err= 0: pid=76533: Wed Nov 27 11:06:12 2024 00:10:45.360 read: IOPS=22.9k, BW=89.4MiB/s (93.8MB/s)(179MiB/2001msec) 00:10:45.360 slat (nsec): min=3369, max=74209, avg=5366.60, stdev=2336.00 00:10:45.360 clat (usec): min=202, max=9062, avg=2790.81, stdev=826.81 00:10:45.360 lat (usec): min=207, max=9096, avg=2796.17, stdev=828.27 00:10:45.360 clat percentiles (usec): 00:10:45.360 | 1.00th=[ 1811], 5.00th=[ 2343], 10.00th=[ 2376], 20.00th=[ 2442], 00:10:45.360 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 
2573], 60.00th=[ 2606], 00:10:45.360 | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 3326], 95.00th=[ 4883], 00:10:45.360 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7832], 99.95th=[ 7898], 00:10:45.360 | 99.99th=[ 8848] 00:10:45.360 bw ( KiB/s): min=85176, max=94968, per=99.08%, avg=90754.67, stdev=5036.76, samples=3 00:10:45.360 iops : min=21294, max=23742, avg=22688.67, stdev=1259.19, samples=3 00:10:45.360 write: IOPS=22.8k, BW=88.9MiB/s (93.2MB/s)(178MiB/2001msec); 0 zone resets 00:10:45.360 slat (nsec): min=3487, max=78635, avg=5747.13, stdev=2384.13 00:10:45.360 clat (usec): min=211, max=8925, avg=2796.81, stdev=828.00 00:10:45.360 lat (usec): min=216, max=8935, avg=2802.55, stdev=829.51 00:10:45.360 clat percentiles (usec): 00:10:45.360 | 1.00th=[ 1811], 5.00th=[ 2343], 10.00th=[ 2376], 20.00th=[ 2442], 00:10:45.360 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606], 00:10:45.360 | 70.00th=[ 2671], 80.00th=[ 2737], 90.00th=[ 3326], 95.00th=[ 4948], 00:10:45.360 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7832], 99.95th=[ 7898], 00:10:45.360 | 99.99th=[ 8586] 00:10:45.360 bw ( KiB/s): min=87096, max=93776, per=99.90%, avg=90946.67, stdev=3455.13, samples=3 00:10:45.360 iops : min=21774, max=23444, avg=22736.67, stdev=863.78, samples=3 00:10:45.360 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:10:45.360 lat (msec) : 2=1.58%, 4=91.50%, 10=6.87% 00:10:45.360 cpu : usr=99.25%, sys=0.00%, ctx=27, majf=0, minf=626 00:10:45.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:45.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.360 issued rwts: total=45819,45541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.360 00:10:45.360 Run status group 0 (all jobs): 00:10:45.360 READ: bw=89.4MiB/s (93.8MB/s), 89.4MiB/s-89.4MiB/s (93.8MB/s-93.8MB/s), io=179MiB (188MB), run=2001-2001msec 00:10:45.360 WRITE: bw=88.9MiB/s (93.2MB/s), 88.9MiB/s-88.9MiB/s (93.2MB/s-93.2MB/s), io=178MiB (187MB), run=2001-2001msec 00:10:45.360 ----------------------------------------------------- 00:10:45.360 Suppressions used: 00:10:45.360 count bytes template 00:10:45.360 1 32 /usr/src/fio/parse.c 00:10:45.360 1 8 libtcmalloc_minimal.so 00:10:45.360 ----------------------------------------------------- 00:10:45.360 00:10:45.360 11:06:12 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:45.360 11:06:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:45.360 11:06:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:45.360 11:06:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:45.360 11:06:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:45.360 11:06:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:45.360 11:06:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:45.360 11:06:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:45.360 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:45.360 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:45.360 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:45.360 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:45.360 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:45.360 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:45.360 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:45.360 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:45.360 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:45.360 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:45.360 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:45.360 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:45.361 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:45.361 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:45.361 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:45.361 11:06:13 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:45.361 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:45.361 fio-3.35 00:10:45.361 Starting 1 thread 00:10:50.642 00:10:50.642 test: (groupid=0, jobs=1): err= 0: pid=76588: Wed Nov 27 11:06:18 2024 00:10:50.642 read: IOPS=21.1k, BW=82.4MiB/s (86.4MB/s)(165MiB/2001msec) 00:10:50.642 slat (nsec): min=3396, max=72961, avg=5133.20, stdev=2480.11 00:10:50.642 clat (usec): min=268, max=9804, avg=3025.64, stdev=1064.19 00:10:50.642 lat (usec): min=274, max=9870, avg=3030.77, stdev=1065.32 00:10:50.642 clat percentiles (usec): 00:10:50.642 | 1.00th=[ 1516], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2376], 00:10:50.642 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2769], 00:10:50.642 | 70.00th=[ 2966], 80.00th=[ 3556], 90.00th=[ 4686], 95.00th=[ 5407], 00:10:50.642 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 8029], 99.95th=[ 8848], 00:10:50.642 | 99.99th=[ 9634] 00:10:50.642 bw ( KiB/s): min=77224, max=89232, per=100.00%, avg=85120.00, stdev=6840.10, samples=3 00:10:50.642 iops : min=19306, max=22308, avg=21280.00, stdev=1710.03, samples=3 00:10:50.642 write: IOPS=21.0k, BW=81.9MiB/s (85.9MB/s)(164MiB/2001msec); 0 zone resets 00:10:50.642 slat (nsec): min=3502, max=72648, avg=5333.70, stdev=2431.43 00:10:50.642 clat (usec): min=258, max=9735, avg=3039.69, stdev=1058.42 00:10:50.642 lat (usec): min=263, max=9749, avg=3045.02, stdev=1059.55 00:10:50.642 clat percentiles (usec): 00:10:50.642 | 1.00th=[ 1549], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2376], 00:10:50.642 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2769], 00:10:50.642 | 70.00th=[ 2999], 80.00th=[ 3556], 90.00th=[ 4686], 95.00th=[ 5407], 
00:10:50.642 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 8225], 99.95th=[ 8979], 00:10:50.642 | 99.99th=[ 9634] 00:10:50.642 bw ( KiB/s): min=78304, max=89232, per=100.00%, avg=85277.33, stdev=6057.19, samples=3 00:10:50.642 iops : min=19576, max=22308, avg=21319.33, stdev=1514.30, samples=3 00:10:50.642 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.10% 00:10:50.642 lat (msec) : 2=2.89%, 4=81.48%, 10=15.52% 00:10:50.642 cpu : usr=98.90%, sys=0.25%, ctx=33, majf=0, minf=625 00:10:50.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:50.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.642 issued rwts: total=42218,41958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.642 00:10:50.642 Run status group 0 (all jobs): 00:10:50.642 READ: bw=82.4MiB/s (86.4MB/s), 82.4MiB/s-82.4MiB/s (86.4MB/s-86.4MB/s), io=165MiB (173MB), run=2001-2001msec 00:10:50.642 WRITE: bw=81.9MiB/s (85.9MB/s), 81.9MiB/s-81.9MiB/s (85.9MB/s-85.9MB/s), io=164MiB (172MB), run=2001-2001msec 00:10:50.642 ----------------------------------------------------- 00:10:50.642 Suppressions used: 00:10:50.642 count bytes template 00:10:50.642 1 32 /usr/src/fio/parse.c 00:10:50.642 1 8 libtcmalloc_minimal.so 00:10:50.642 ----------------------------------------------------- 00:10:50.642 00:10:50.642 ************************************ 00:10:50.642 END TEST nvme_fio 00:10:50.642 ************************************ 00:10:50.642 11:06:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:50.642 11:06:19 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:50.642 00:10:50.642 real 0m27.049s 00:10:50.642 user 0m16.265s 00:10:50.642 sys 0m19.812s 00:10:50.642 11:06:19 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.643 11:06:19 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:50.643 00:10:50.643 real 1m34.473s 00:10:50.643 user 3m30.640s 00:10:50.643 sys 0m29.992s 00:10:50.643 11:06:19 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.643 ************************************ 00:10:50.643 END TEST nvme 00:10:50.643 ************************************ 00:10:50.643 11:06:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:50.643 11:06:19 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:50.643 11:06:19 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:50.643 11:06:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:50.643 11:06:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.643 11:06:19 -- common/autotest_common.sh@10 -- # set +x 00:10:50.643 ************************************ 00:10:50.643 START TEST nvme_scc 00:10:50.643 ************************************ 00:10:50.643 11:06:19 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:50.643 * Looking for test storage... 
00:10:50.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:50.643 11:06:19 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:50.643 11:06:19 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:50.643 11:06:19 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:50.643 11:06:19 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:50.643 11:06:19 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.643 11:06:19 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:50.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.643 --rc genhtml_branch_coverage=1 00:10:50.643 --rc genhtml_function_coverage=1 00:10:50.643 --rc genhtml_legend=1 00:10:50.643 --rc geninfo_all_blocks=1 00:10:50.643 --rc geninfo_unexecuted_blocks=1 00:10:50.643 00:10:50.643 ' 00:10:50.643 11:06:19 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:50.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.643 --rc genhtml_branch_coverage=1 00:10:50.643 --rc genhtml_function_coverage=1 00:10:50.643 --rc genhtml_legend=1 00:10:50.643 --rc geninfo_all_blocks=1 00:10:50.643 --rc geninfo_unexecuted_blocks=1 00:10:50.643 00:10:50.643 ' 00:10:50.643 11:06:19 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:50.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.643 --rc genhtml_branch_coverage=1 00:10:50.643 --rc genhtml_function_coverage=1 00:10:50.643 --rc genhtml_legend=1 00:10:50.643 --rc geninfo_all_blocks=1 00:10:50.643 --rc geninfo_unexecuted_blocks=1 00:10:50.643 00:10:50.643 ' 00:10:50.643 11:06:19 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:50.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.643 --rc genhtml_branch_coverage=1 00:10:50.643 --rc genhtml_function_coverage=1 00:10:50.643 --rc genhtml_legend=1 00:10:50.643 --rc geninfo_all_blocks=1 00:10:50.643 --rc geninfo_unexecuted_blocks=1 00:10:50.643 00:10:50.643 ' 00:10:50.643 11:06:19 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:50.643 11:06:19 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:50.643 11:06:19 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:50.643 11:06:19 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:50.643 11:06:19 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.643 11:06:19 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.643 11:06:19 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.643 11:06:19 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.643 11:06:19 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.643 11:06:19 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:50.643 11:06:19 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
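With the lcov and PATH setup done, nvme_scc.sh sources test/common/nvme/functions.sh, which declares the ctrls/nvmes/bdfs associative arrays and then scans /sys/class/nvme/nvme*, running id-ctrl and id-ns on each device and storing every "register : value" pair per controller; that is what produces the long eval trace that follows. A minimal sketch of that parsing step, assuming nvme-cli's human-readable output; the array name regs is illustrative (the harness uses nvme0, nvme0n1, nvme1, ...):

  #!/usr/bin/env bash
  # Sketch of the nvme_get parsing pattern traced below: split "reg : value"
  # lines on ':' and keep the pairs in an associative array per controller.
  declare -A regs=()
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}      # register names are single tokens (vid, mdts, subnqn, ...)
      val=${val# }                  # drop the space nvme-cli prints after the colon
      [[ -n $reg && -n $val ]] || continue
      regs[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  printf 'vid=%s mdts=%s subnqn=%s\n' "${regs[vid]}" "${regs[mdts]}" "${regs[subnqn]}"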
00:10:50.643 11:06:19 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:50.643 11:06:19 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:50.643 11:06:19 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:50.643 11:06:19 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:50.643 11:06:19 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:50.643 11:06:19 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:50.643 11:06:19 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:50.643 11:06:19 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:50.643 11:06:19 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:50.643 11:06:19 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:50.643 11:06:19 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:50.643 11:06:19 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:50.643 11:06:19 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:50.643 11:06:19 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:50.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:51.165 Waiting for block devices as requested 00:10:51.165 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:51.165 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:51.426 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:51.426 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:56.756 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:56.756 11:06:25 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:56.757 11:06:25 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:56.757 11:06:25 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:56.757 11:06:25 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:56.757 11:06:25 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.757 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:56.758 11:06:25 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.758 11:06:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:56.758 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:56.759 11:06:25 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:56.759 11:06:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:56.760 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:56.761 11:06:25 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:56.761 11:06:25 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:56.761 11:06:25 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:56.761 11:06:25 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:56.761 11:06:25 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.761 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:56.762 11:06:25 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:56.762 
11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:56.762 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:56.763 11:06:25 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:56.763 11:06:25 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:56.763 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:56.764 11:06:25 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:56.764 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.765 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:56.766 
11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:56.766 11:06:25 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:56.766 11:06:25 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:56.766 11:06:25 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:56.766 11:06:25 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:56.766 11:06:25 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.766 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:56.767 11:06:25 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:56.767 11:06:25 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:56.767 11:06:25 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:56.767 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:56.768 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:56.769 
11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.769 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:56.770 11:06:25 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.770 11:06:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:56.771 11:06:25 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:56.771 11:06:25 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.771 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 
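The block above shows nvme/functions.sh walking the `nvme id-ns` report for nvme2n2 field by field: each line of the identify output is split on the colon with IFS=:, and the register/value pair is eval'd into a global associative array named after the namespace, so entries such as nvme2n2[mssrl]=128 and nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' can be looked up later. Below is a minimal sketch of that parsing pattern, assuming nvme-cli is installed and the namespace node exists; parse_id_ns is an illustrative name, the script's own helper is nvme_get.

#!/usr/bin/env bash
# Illustrative sketch of the pattern traced above: read "reg : val" lines
# from `nvme id-ns` and store them in a global associative array named
# after the namespace (e.g. nvme2n2[lbaf7]=...). parse_id_ns is a
# hypothetical name; the real helper in nvme/functions.sh is nvme_get.
parse_id_ns() {
    local ref=$1 dev=$2 reg val
    declare -gA "$ref"               # declares (or reuses) the global array, e.g. nvme2n2
    local -n _ns=$ref                # nameref so we can assign by register name
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # strip padding around the register name
        [[ -n $reg && -n $val ]] || continue
        _ns[$reg]=${val# }           # keep the raw value, drop one leading space
    done < <(nvme id-ns "$dev")
}

# Usage (assumes /dev/nvme2n2 exists and nvme-cli is installed):
#   parse_id_ns nvme2n2 /dev/nvme2n2
#   echo "${nvme2n2[nsze]} ${nvme2n2[flbas]} ${nvme2n2[lbaf4]}"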
11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 
11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.772 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:56.773 11:06:25 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:56.773 
11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:56.773 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:56.774 11:06:25 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:56.774 11:06:25 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:56.774 11:06:25 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:56.774 11:06:25 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:56.774 11:06:25 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
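Directly above, the same machinery moves on to controller discovery: the loop over /sys/class/nvme/nvme* finds nvme3, pci_can_use accepts its address 0000:00:13.0, and `nvme id-ctrl /dev/nvme3` is parsed into the nvme3 array (vid, ssvid, sn, mn, fr and the rest). A condensed sketch of that enumeration follows, under the same assumptions; enumerate_ctrls and parse_id_ctrl are illustrative names, and resolving the PCI address through the sysfs device symlink is a simplification of the script's pci_can_use filtering.

#!/usr/bin/env bash
# Condensed sketch of the controller discovery step traced above: walk
# /sys/class/nvme/nvme*, note each controller's PCI address, and run the
# same reg/val parse against `nvme id-ctrl`. Helper names are illustrative.
declare -A ctrls bdfs

parse_id_ctrl() {                    # same pattern as the id-ns parse above
    local ref=$1 dev=$2 reg val
    declare -gA "$ref"
    local -n _ctrl=$ref
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}
        [[ -n $reg && -n $val ]] && _ctrl[$reg]=${val# }
    done < <(nvme id-ctrl "$dev")
}

enumerate_ctrls() {
    local path name pci
    for path in /sys/class/nvme/nvme*; do
        [[ -e $path ]] || continue
        name=${path##*/}                                   # e.g. nvme3
        pci=$(basename "$(readlink -f "$path/device")")    # e.g. 0000:00:13.0
        parse_id_ctrl "$name" "/dev/$name"                 # fills the $name array
        ctrls[$name]=$name
        bdfs[$name]=$pci
    done
}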
00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.774 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 
11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.775 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.776 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:56.777 11:06:25 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:56.777 11:06:25 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:56.777 
11:06:25 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:56.777 11:06:25 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:56.777 11:06:25 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:56.777 11:06:25 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:56.777 11:06:25 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:57.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:57.922 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:57.922 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:57.922 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:57.922 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:10:57.923 11:06:26 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:57.923 11:06:26 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:57.923 11:06:26 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.923 11:06:26 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:57.923 ************************************ 00:10:57.923 START TEST nvme_simple_copy 00:10:57.923 ************************************ 00:10:57.923 11:06:26 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:58.183 Initializing NVMe Controllers 00:10:58.183 Attaching to 0000:00:10.0 00:10:58.183 Controller supports SCC. Attached to 0000:00:10.0 00:10:58.183 Namespace ID: 1 size: 6GB 00:10:58.183 Initialization complete. 00:10:58.183 00:10:58.183 Controller QEMU NVMe Ctrl (12340 ) 00:10:58.183 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:58.183 Namespace Block Size:4096 00:10:58.183 Writing LBAs 0 to 63 with Random Data 00:10:58.183 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:58.183 LBAs matching Written Data: 64 00:10:58.183 ************************************ 00:10:58.183 END TEST nvme_simple_copy 00:10:58.183 ************************************ 00:10:58.183 00:10:58.183 real 0m0.238s 00:10:58.183 user 0m0.075s 00:10:58.183 sys 0m0.062s 00:10:58.183 11:06:26 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.183 11:06:26 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:58.183 ************************************ 00:10:58.183 END TEST nvme_scc 00:10:58.183 ************************************ 00:10:58.183 00:10:58.183 real 0m7.618s 00:10:58.183 user 0m0.997s 00:10:58.183 sys 0m1.341s 00:10:58.183 11:06:26 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.183 11:06:26 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:58.183 11:06:26 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:58.183 11:06:26 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:58.183 11:06:26 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:58.183 11:06:26 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:58.183 11:06:26 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:58.183 11:06:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:58.183 11:06:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.183 11:06:26 -- common/autotest_common.sh@10 -- # set +x 00:10:58.183 ************************************ 00:10:58.183 START TEST nvme_fdp 00:10:58.183 ************************************ 00:10:58.183 11:06:26 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:10:58.183 * Looking for test storage... 
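[editor's note] The trace above picks a controller for the SCC test by reading each controller's ONCS field and testing bit 8 (copy-command support), then verifies a simple copy of LBAs 0-63 to LBA 256 (64 LBAs match). A minimal, illustrative bash sketch of that detection pattern follows; it assumes the per-controller associative arrays (nvme0..nvme3) populated earlier in the trace and is not the actual test/common/nvme/functions.sh implementation.

    # Sketch only: mirrors the ONCS bit-8 check traced above; names and
    # structure are simplified relative to nvme/functions.sh.
    ctrl_has_scc() {
        local ctrl=$1 oncs
        local -n _ctrl=$ctrl              # nameref to e.g. the nvme1 array
        oncs=${_ctrl[oncs]:-0}            # 0x15d in this run
        (( oncs & 1 << 8 ))               # bit 8 set => Simple Copy supported
    }

    for ctrl in "${!ctrls[@]}"; do        # ctrls[] maps nvme0..nvme3 here
        ctrl_has_scc "$ctrl" && echo "$ctrl"
    done

In this run the check passes for all four controllers (oncs=0x15d), and nvme1 at 0000:00:10.0 is the one handed to the simple_copy test.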
00:10:58.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:58.183 11:06:27 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:58.183 11:06:27 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:10:58.183 11:06:27 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:58.445 11:06:27 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:58.445 11:06:27 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.445 11:06:27 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:58.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.445 --rc genhtml_branch_coverage=1 00:10:58.445 --rc genhtml_function_coverage=1 00:10:58.445 --rc genhtml_legend=1 00:10:58.445 --rc geninfo_all_blocks=1 00:10:58.445 --rc geninfo_unexecuted_blocks=1 00:10:58.445 00:10:58.445 ' 00:10:58.445 11:06:27 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:58.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.445 --rc genhtml_branch_coverage=1 00:10:58.445 --rc genhtml_function_coverage=1 00:10:58.445 --rc genhtml_legend=1 00:10:58.445 --rc geninfo_all_blocks=1 00:10:58.445 --rc geninfo_unexecuted_blocks=1 00:10:58.445 00:10:58.445 ' 00:10:58.445 11:06:27 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:58.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.445 --rc genhtml_branch_coverage=1 00:10:58.445 --rc genhtml_function_coverage=1 00:10:58.445 --rc genhtml_legend=1 00:10:58.445 --rc geninfo_all_blocks=1 00:10:58.445 --rc geninfo_unexecuted_blocks=1 00:10:58.445 00:10:58.445 ' 00:10:58.445 11:06:27 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:58.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.445 --rc genhtml_branch_coverage=1 00:10:58.445 --rc genhtml_function_coverage=1 00:10:58.445 --rc genhtml_legend=1 00:10:58.445 --rc geninfo_all_blocks=1 00:10:58.445 --rc geninfo_unexecuted_blocks=1 00:10:58.445 00:10:58.445 ' 00:10:58.445 11:06:27 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:58.445 11:06:27 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:58.445 11:06:27 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:58.445 11:06:27 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:58.445 11:06:27 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.445 11:06:27 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.445 11:06:27 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.445 11:06:27 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.445 11:06:27 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.445 11:06:27 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:58.445 11:06:27 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
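[editor's note] The scan that follows (scan_nvme_ctrls) walks /sys/class/nvme/nvme*, runs nvme id-ctrl against each device, and evals every "register : value" pair into a per-controller associative array, as the long trace below shows. A simplified, illustrative sketch of that parsing loop, with whitespace handling only loosely matching the real helper:

    # Sketch only: approximates the nvme_get pattern exercised in the trace
    # below; trimming details are simplified, not the real implementation.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                       # e.g. declare -gA nvme0=()
        while IFS=: read -r reg val; do           # lines like "vid : 0x1b36"
            reg=${reg//[[:space:]]/}              # strip the field name
            val="${val#"${val%%[![:space:]]*}"}"  # drop leading spaces only
            [[ -n $reg && -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # e.g. id-ctrl /dev/nvme0
    }

Called as nvme_get nvme0 id-ctrl /dev/nvme0, as in the trace, this leaves fields such as nvme0[vid]=0x1b36 and nvme0[oncs]=0x15d available for the later feature checks.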
00:10:58.445 11:06:27 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:58.445 11:06:27 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:58.445 11:06:27 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:58.445 11:06:27 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:58.445 11:06:27 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:58.445 11:06:27 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:58.445 11:06:27 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:58.445 11:06:27 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:58.445 11:06:27 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:58.445 11:06:27 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:58.445 11:06:27 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:58.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:58.707 Waiting for block devices as requested 00:10:58.968 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.968 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.968 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.968 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:04.270 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:04.270 11:06:32 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:04.270 11:06:32 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:04.270 11:06:32 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:04.270 11:06:32 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:04.270 11:06:32 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:04.270 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:04.271 11:06:32 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:04.271 11:06:32 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:04.271 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:04.272 11:06:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:04.272 11:06:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:04.272 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:04.273 11:06:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:04.273 
11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:04.273 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:04.274 11:06:32 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:04.274 11:06:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:04.274 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:04.275 11:06:32 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:04.275 11:06:32 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:04.275 11:06:32 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:04.275 11:06:32 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.275 11:06:32 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:04.275 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 
11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.276 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 
11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:04.277 11:06:32 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:04.277 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.278 11:06:32 
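[Editor's note] The trace above and below is the same read/eval cycle repeated once per field of the nvme id-ctrl / nvme id-ns output: split each printed line at the first colon, skip lines with no value, and store the value in a per-device associative array (nvme0, nvme0n1, nvme1, ...). The following is a minimal sketch of that helper, reconstructed from the nvme/functions.sh line numbers shown in the trace rather than from the verbatim source; key and whitespace normalization is simplified here, so treat it as an approximation.

    nvme_get() {                       # e.g. nvme_get nvme1 id-ctrl /dev/nvme1
        local ref=$1 reg val           # @17: name of the target associative array
        shift                          # @18: remaining args are the nvme-cli sub-command
        local -gA "$ref=()"            # @20: declare the array globally, e.g. nvme1=()
        while IFS=: read -r reg val; do            # @21: split each line at the first ':'
            [[ -n $val ]] || continue              # @22: skip header/blank lines with no value
            # @23: store the field, e.g. nvme1[mdts]="7"; the real helper also trims
            # whitespace from the key and value, which is simplified in this sketch
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: command path as seen in the trace
    }

Because read with IFS=':' splits only at the first colon, composite values survive intact, which is why the power-state line is stored whole later in this trace as nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'.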
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:04.278 11:06:32 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:04.278 11:06:33 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:04.278 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.279 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:04.280 11:06:33 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:04.280 11:06:33 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:04.280 11:06:33 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:04.280 11:06:33 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:04.280 
11:06:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:04.280 11:06:33 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.280 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:04.281 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:04.282 11:06:33 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.282 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.282 11:06:33 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:04.283 11:06:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.283 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:04.284 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:04.285 11:06:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:04.285 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.286 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:04.287 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:04.548 
11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:04.548 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:04.549 11:06:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:04.549 11:06:33 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:04.549 11:06:33 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:04.549 11:06:33 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:04.549 11:06:33 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:04.549 11:06:33 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.549 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 
11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.550 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 
11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.551 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
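The repeated IFS=: / read -r reg val / eval entries in this stretch are nvme/functions.sh loading identify-controller fields (sqes, cqes, oncs, subnqn and so on) into a bash associative array keyed by register name. A minimal standalone sketch of the same pattern, assuming nvme-cli is installed and prints its usual "field : value" id-ctrl layout, and using a plain associative array instead of the dynamically named one the helper builds with eval:

    # Hedged sketch, not the SPDK helper itself: collect "field : value"
    # lines into an associative array (good enough for the scalar fields
    # shown in this trace).
    declare -A nvme3
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue
        reg=${reg//[[:space:]]/}                 # "sqes      " -> "sqes"
        val="${val#"${val%%[![:space:]]*}"}"     # strip leading blanks
        nvme3[$reg]=$val                         # e.g. nvme3[sqes]=0x66
    done < <(nvme id-ctrl /dev/nvme3)
    echo "${nvme3[subnqn]}"

The trace keeps appending fields this way until the whole register dump for nvme3 has been captured.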
00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.552 11:06:33 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:04.552 11:06:33 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:11:04.552 11:06:33 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:11:04.552 11:06:33 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:04.552 11:06:33 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:04.552 11:06:33 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:04.813 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:05.384 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.384 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.384 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.669 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.669 11:06:34 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:05.669 11:06:34 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:05.669 11:06:34 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.669 11:06:34 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:05.669 ************************************ 00:11:05.669 START TEST nvme_flexible_data_placement 00:11:05.669 ************************************ 00:11:05.669 11:06:34 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:05.931 Initializing NVMe Controllers 00:11:05.931 Attaching to 0000:00:13.0 00:11:05.931 Controller supports FDP Attached to 0000:00:13.0 00:11:05.931 Namespace ID: 1 Endurance Group ID: 1 00:11:05.931 Initialization complete. 00:11:05.931 00:11:05.931 ================================== 00:11:05.931 == FDP tests for Namespace: #01 == 00:11:05.931 ================================== 00:11:05.931 00:11:05.931 Get Feature: FDP: 00:11:05.931 ================= 00:11:05.931 Enabled: Yes 00:11:05.931 FDP configuration Index: 0 00:11:05.931 00:11:05.931 FDP configurations log page 00:11:05.931 =========================== 00:11:05.931 Number of FDP configurations: 1 00:11:05.931 Version: 0 00:11:05.931 Size: 112 00:11:05.931 FDP Configuration Descriptor: 0 00:11:05.931 Descriptor Size: 96 00:11:05.931 Reclaim Group Identifier format: 2 00:11:05.931 FDP Volatile Write Cache: Not Present 00:11:05.931 FDP Configuration: Valid 00:11:05.931 Vendor Specific Size: 0 00:11:05.931 Number of Reclaim Groups: 2 00:11:05.931 Number of Reclaim Unit Handles: 8 00:11:05.931 Max Placement Identifiers: 128 00:11:05.931 Number of Namespaces Supported: 256 00:11:05.931 Reclaim unit Nominal Size: 6000000 bytes 00:11:05.931 Estimated Reclaim Unit Time Limit: Not Reported 00:11:05.931 RUH Desc #000: RUH Type: Initially Isolated 00:11:05.931 RUH Desc #001: RUH Type: Initially Isolated 00:11:05.931 RUH Desc #002: RUH Type: Initially Isolated 00:11:05.931 RUH Desc #003: RUH Type: Initially Isolated 00:11:05.931 RUH Desc #004: RUH Type: Initially Isolated 00:11:05.931 RUH Desc #005: RUH Type: Initially Isolated 00:11:05.931 RUH Desc #006: RUH Type: Initially Isolated 00:11:05.931 RUH Desc #007: RUH Type: Initially Isolated 00:11:05.931 00:11:05.931 FDP reclaim unit handle usage log page 00:11:05.931 ====================================== 00:11:05.931 Number of Reclaim Unit Handles: 8 00:11:05.931 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:05.931 RUH Usage Desc #001: RUH Attributes: Unused 00:11:05.931 RUH Usage Desc #002: RUH Attributes: Unused 00:11:05.931 RUH Usage Desc #003: RUH Attributes: Unused 00:11:05.931 RUH Usage Desc #004: RUH Attributes: Unused 00:11:05.931 RUH Usage Desc #005: RUH Attributes: Unused 00:11:05.931 RUH Usage Desc #006: RUH Attributes: Unused 00:11:05.931 RUH Usage Desc #007: RUH Attributes: Unused 00:11:05.931 00:11:05.931 FDP statistics log page 00:11:05.931 ======================= 00:11:05.931 Host bytes with metadata written: 2025984000 00:11:05.931 Media bytes with metadata written: 2026319872 00:11:05.931 Media bytes erased: 0 00:11:05.931 00:11:05.931 FDP Reclaim unit handle status 00:11:05.931 ============================== 00:11:05.931 Number of RUHS descriptors: 2 00:11:05.931 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000053df 00:11:05.931 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:05.931 00:11:05.931 FDP write on placement id: 0 success 00:11:05.931 00:11:05.931 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:11:05.931 00:11:05.931 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:05.931 00:11:05.931 Get Feature: FDP Events for Placement handle: #0 00:11:05.931 ======================== 00:11:05.931 Number of FDP Events: 6 00:11:05.931 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:05.931 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:05.931 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:05.931 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:05.931 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:05.931 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:05.931 00:11:05.931 FDP events log page 00:11:05.931 =================== 00:11:05.931 Number of FDP events: 1 00:11:05.931 FDP Event #0: 00:11:05.931 Event Type: RU Not Written to Capacity 00:11:05.931 Placement Identifier: Valid 00:11:05.931 NSID: Valid 00:11:05.931 Location: Valid 00:11:05.931 Placement Identifier: 0 00:11:05.931 Event Timestamp: 6 00:11:05.931 Namespace Identifier: 1 00:11:05.931 Reclaim Group Identifier: 0 00:11:05.931 Reclaim Unit Handle Identifier: 0 00:11:05.931 00:11:05.931 FDP test passed 00:11:05.931 00:11:05.931 real 0m0.227s 00:11:05.931 user 0m0.065s 00:11:05.931 sys 0m0.060s 00:11:05.931 11:06:34 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.931 11:06:34 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:05.931 ************************************ 00:11:05.931 END TEST nvme_flexible_data_placement 00:11:05.931 ************************************ 00:11:05.931 ************************************ 00:11:05.931 END TEST nvme_fdp 00:11:05.931 ************************************ 00:11:05.931 00:11:05.931 real 0m7.668s 00:11:05.931 user 0m1.096s 00:11:05.931 sys 0m1.331s 00:11:05.931 11:06:34 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.931 11:06:34 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:05.931 11:06:34 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:05.931 11:06:34 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:05.931 11:06:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:05.931 11:06:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.931 11:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:05.931 ************************************ 00:11:05.931 START TEST nvme_rpc 00:11:05.932 ************************************ 00:11:05.932 11:06:34 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:05.932 * Looking for test storage... 
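Before that fdp run, get_ctrls_with_feature walked every attached controller and kept only the one whose CTRATT value has bit 19 set (0x88010 for nvme3, versus 0x8000 for the others), which is why the test pinned itself to 0000:00:13.0. A rough standalone version of that capability check, assuming nvme-cli is available and that its id-ctrl output carries a ctratt line:

    # Hedged sketch mirroring the (( ctratt & 1 << 19 )) test in
    # nvme/functions.sh above; bit 19 of CTRATT advertises FDP support.
    ctrl_supports_fdp() {
        local dev=$1 ctratt
        ctratt=$(nvme id-ctrl "$dev" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
        (( ctratt & 1 << 19 ))
    }
    ctrl_supports_fdp /dev/nvme3 && echo "FDP supported"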
00:11:05.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:05.932 11:06:34 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:05.932 11:06:34 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:11:05.932 11:06:34 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.193 11:06:34 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:06.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.193 --rc genhtml_branch_coverage=1 00:11:06.193 --rc genhtml_function_coverage=1 00:11:06.193 --rc genhtml_legend=1 00:11:06.193 --rc geninfo_all_blocks=1 00:11:06.193 --rc geninfo_unexecuted_blocks=1 00:11:06.193 00:11:06.193 ' 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:06.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.193 --rc genhtml_branch_coverage=1 00:11:06.193 --rc genhtml_function_coverage=1 00:11:06.193 --rc genhtml_legend=1 00:11:06.193 --rc geninfo_all_blocks=1 00:11:06.193 --rc geninfo_unexecuted_blocks=1 00:11:06.193 00:11:06.193 ' 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:11:06.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.193 --rc genhtml_branch_coverage=1 00:11:06.193 --rc genhtml_function_coverage=1 00:11:06.193 --rc genhtml_legend=1 00:11:06.193 --rc geninfo_all_blocks=1 00:11:06.193 --rc geninfo_unexecuted_blocks=1 00:11:06.193 00:11:06.193 ' 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:06.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.193 --rc genhtml_branch_coverage=1 00:11:06.193 --rc genhtml_function_coverage=1 00:11:06.193 --rc genhtml_legend=1 00:11:06.193 --rc geninfo_all_blocks=1 00:11:06.193 --rc geninfo_unexecuted_blocks=1 00:11:06.193 00:11:06.193 ' 00:11:06.193 11:06:34 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.193 11:06:34 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:11:06.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.193 11:06:34 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:06.193 11:06:34 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77952 00:11:06.193 11:06:34 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:06.193 11:06:34 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77952 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 77952 ']' 00:11:06.193 11:06:34 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:06.193 11:06:34 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.193 [2024-11-27 11:06:35.017326] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
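Once spdk_tgt is listening on /var/tmp/spdk.sock, nvme_rpc.sh drives it entirely over JSON-RPC. Condensed out of the trace that follows, the exercised sequence amounts to the three rpc.py calls below (a readable summary of what the trace shows, not an extra test):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes bdev Nvme0n1
    $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1             # expected to fail: "open file failed."
    $rpc bdev_nvme_detach_controller Nvme0

The apply_firmware call is deliberately given a missing file so the test can assert that the JSON-RPC error (code -32603) is surfaced rather than a silent success.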
00:11:06.193 [2024-11-27 11:06:35.017481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77952 ] 00:11:06.454 [2024-11-27 11:06:35.167420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:06.454 [2024-11-27 11:06:35.221447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.454 [2024-11-27 11:06:35.221506] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.029 11:06:35 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:07.029 11:06:35 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:07.029 11:06:35 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:07.291 Nvme0n1 00:11:07.291 11:06:36 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:07.291 11:06:36 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:07.553 request: 00:11:07.553 { 00:11:07.553 "bdev_name": "Nvme0n1", 00:11:07.553 "filename": "non_existing_file", 00:11:07.553 "method": "bdev_nvme_apply_firmware", 00:11:07.553 "req_id": 1 00:11:07.553 } 00:11:07.553 Got JSON-RPC error response 00:11:07.553 response: 00:11:07.553 { 00:11:07.553 "code": -32603, 00:11:07.553 "message": "open file failed." 00:11:07.553 } 00:11:07.553 11:06:36 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:07.553 11:06:36 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:07.553 11:06:36 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:07.814 11:06:36 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:07.814 11:06:36 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77952 00:11:07.814 11:06:36 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 77952 ']' 00:11:07.814 11:06:36 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 77952 00:11:07.814 11:06:36 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:11:07.814 11:06:36 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.814 11:06:36 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77952 00:11:07.814 killing process with pid 77952 00:11:07.814 11:06:36 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:07.814 11:06:36 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:07.814 11:06:36 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77952' 00:11:07.814 11:06:36 nvme_rpc -- common/autotest_common.sh@969 -- # kill 77952 00:11:07.814 11:06:36 nvme_rpc -- common/autotest_common.sh@974 -- # wait 77952 00:11:08.092 ************************************ 00:11:08.092 END TEST nvme_rpc 00:11:08.092 ************************************ 00:11:08.092 00:11:08.092 real 0m2.208s 00:11:08.092 user 0m4.210s 00:11:08.092 sys 0m0.593s 00:11:08.092 11:06:36 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.092 11:06:36 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.092 11:06:36 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:08.092 11:06:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:11:08.092 11:06:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.092 11:06:36 -- common/autotest_common.sh@10 -- # set +x 00:11:08.355 ************************************ 00:11:08.355 START TEST nvme_rpc_timeouts 00:11:08.355 ************************************ 00:11:08.355 11:06:36 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:08.355 * Looking for test storage... 00:11:08.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.355 11:06:37 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:08.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.355 --rc genhtml_branch_coverage=1 00:11:08.355 --rc genhtml_function_coverage=1 00:11:08.355 --rc genhtml_legend=1 00:11:08.355 --rc geninfo_all_blocks=1 00:11:08.355 --rc geninfo_unexecuted_blocks=1 00:11:08.355 00:11:08.355 ' 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:08.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.355 --rc genhtml_branch_coverage=1 00:11:08.355 --rc genhtml_function_coverage=1 00:11:08.355 --rc genhtml_legend=1 00:11:08.355 --rc geninfo_all_blocks=1 00:11:08.355 --rc geninfo_unexecuted_blocks=1 00:11:08.355 00:11:08.355 ' 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:08.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.355 --rc genhtml_branch_coverage=1 00:11:08.355 --rc genhtml_function_coverage=1 00:11:08.355 --rc genhtml_legend=1 00:11:08.355 --rc geninfo_all_blocks=1 00:11:08.355 --rc geninfo_unexecuted_blocks=1 00:11:08.355 00:11:08.355 ' 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:08.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.355 --rc genhtml_branch_coverage=1 00:11:08.355 --rc genhtml_function_coverage=1 00:11:08.355 --rc genhtml_legend=1 00:11:08.355 --rc geninfo_all_blocks=1 00:11:08.355 --rc geninfo_unexecuted_blocks=1 00:11:08.355 00:11:08.355 ' 00:11:08.355 11:06:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.355 11:06:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78006 00:11:08.355 11:06:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78006 00:11:08.355 11:06:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78038 00:11:08.355 11:06:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:08.355 11:06:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78038 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 78038 ']' 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
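The scripts/common.sh lines repeated before each test are the lcov version gate: lt 1.15 2 splits both version strings on dots and dashes and compares them field by field. A rough standalone equivalent of that walk (an illustration of the idea, not the SPDK helper itself):

    version_lt() {
        local IFS=.-
        local -a v1 v2
        local i
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly newer
        done
        return 1                                          # equal
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"

Because the version string fed to the check (1.15 here) is less than 2, the test goes on to export the LCOV_OPTS block with the --rc lcov_branch_coverage=1 and --rc lcov_function_coverage=1 flags seen above.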
00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.355 11:06:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:08.355 11:06:37 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:08.355 [2024-11-27 11:06:37.206086] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:08.355 [2024-11-27 11:06:37.206228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78038 ] 00:11:08.614 [2024-11-27 11:06:37.358328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:08.614 [2024-11-27 11:06:37.413275] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.614 [2024-11-27 11:06:37.413440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.181 Checking default timeout settings: 00:11:09.181 11:06:38 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.181 11:06:38 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:11:09.181 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:09.181 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:09.748 Making settings changes with rpc: 00:11:09.748 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:09.748 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:09.748 Check default vs. modified settings: 00:11:09.748 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:09.748 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:10.007 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:10.007 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:10.007 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78006 00:11:10.007 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:10.007 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.007 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:10.007 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78006 00:11:10.007 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:10.007 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.267 Setting action_on_timeout is changed as expected. 
00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78006 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78006 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.267 Setting timeout_us is changed as expected. 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78006 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78006 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:10.267 Setting timeout_admin_us is changed as expected. 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
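Each of the three comparisons above follows the same pattern: the configuration is dumped with save_config once before and once after bdev_nvme_set_options, then the value of one field is pulled out of each dump with the grep / awk / sed chain and compared. Boiled down to a helper, with the temp-file names taken from this run (illustration only, not the test script itself):

    check_setting() {
        local name=$1 before after
        before=$(grep "$name" /tmp/settings_default_78006  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep  "$name" /tmp/settings_modified_78006 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before == "$after" ]] && return 1      # unchanged would fail the test
        echo "Setting $name is changed as expected."
    }
    check_setting action_on_timeout   # none -> abort in this run
    check_setting timeout_us          # 0 -> 12000000
    check_setting timeout_admin_us    # 0 -> 24000000

Since the defaults (none, 0, 0) all differ from the values pushed through bdev_nvme_set_options, the run ends with RPC TIMEOUT SETTING TEST PASSED below.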
00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78006 /tmp/settings_modified_78006 00:11:10.267 11:06:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78038 00:11:10.267 11:06:38 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 78038 ']' 00:11:10.267 11:06:38 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 78038 00:11:10.267 11:06:38 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:11:10.267 11:06:38 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.267 11:06:38 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78038 00:11:10.267 killing process with pid 78038 00:11:10.267 11:06:38 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.267 11:06:38 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:10.267 11:06:38 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78038' 00:11:10.267 11:06:38 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 78038 00:11:10.267 11:06:38 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 78038 00:11:10.545 RPC TIMEOUT SETTING TEST PASSED. 00:11:10.545 11:06:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:10.545 00:11:10.545 real 0m2.225s 00:11:10.545 user 0m4.363s 00:11:10.545 sys 0m0.516s 00:11:10.545 ************************************ 00:11:10.545 END TEST nvme_rpc_timeouts 00:11:10.545 ************************************ 00:11:10.545 11:06:39 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.545 11:06:39 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:10.545 11:06:39 -- spdk/autotest.sh@239 -- # uname -s 00:11:10.545 11:06:39 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:10.545 11:06:39 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:10.545 11:06:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:10.545 11:06:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.545 11:06:39 -- common/autotest_common.sh@10 -- # set +x 00:11:10.545 ************************************ 00:11:10.545 START TEST sw_hotplug 00:11:10.545 ************************************ 00:11:10.545 11:06:39 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:10.545 * Looking for test storage... 
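sw_hotplug begins by enumerating NVMe controllers that are reachable from user space. The iter_pci_class_code walk traced a little further down reduces to a single lspci pipeline that keeps PCI functions whose class/subclass/prog-if is 01/08/02, i.e. NVM Express controllers; the same filter, pulled out of the trace for readability:

    # List NVMe controller BDFs (same pipeline scripts/common.sh runs below).
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

Each address that survives the filter (0000:00:10.0 through 0000:00:13.0 here) is first vetted by pci_can_use and then collected into the bdfs list, and the test finally keeps only the first nvme_count=2 entries for the hotplug run.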
00:11:10.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:10.545 11:06:39 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:10.545 11:06:39 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:11:10.545 11:06:39 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:10.545 11:06:39 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.545 11:06:39 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:11:10.545 11:06:39 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.545 11:06:39 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:10.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.545 --rc genhtml_branch_coverage=1 00:11:10.545 --rc genhtml_function_coverage=1 00:11:10.545 --rc genhtml_legend=1 00:11:10.545 --rc geninfo_all_blocks=1 00:11:10.545 --rc geninfo_unexecuted_blocks=1 00:11:10.545 00:11:10.545 ' 00:11:10.545 11:06:39 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:10.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.545 --rc genhtml_branch_coverage=1 00:11:10.545 --rc genhtml_function_coverage=1 00:11:10.545 --rc genhtml_legend=1 00:11:10.545 --rc geninfo_all_blocks=1 00:11:10.545 --rc geninfo_unexecuted_blocks=1 00:11:10.545 00:11:10.545 ' 00:11:10.545 11:06:39 
sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:10.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.545 --rc genhtml_branch_coverage=1 00:11:10.545 --rc genhtml_function_coverage=1 00:11:10.545 --rc genhtml_legend=1 00:11:10.545 --rc geninfo_all_blocks=1 00:11:10.545 --rc geninfo_unexecuted_blocks=1 00:11:10.545 00:11:10.545 ' 00:11:10.545 11:06:39 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:10.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.545 --rc genhtml_branch_coverage=1 00:11:10.545 --rc genhtml_function_coverage=1 00:11:10.545 --rc genhtml_legend=1 00:11:10.545 --rc geninfo_all_blocks=1 00:11:10.545 --rc geninfo_unexecuted_blocks=1 00:11:10.545 00:11:10.545 ' 00:11:10.545 11:06:39 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:11.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:11.116 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:11.116 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:11.116 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:11.116 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:11.116 11:06:39 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:11.116 11:06:39 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:11.116 11:06:39 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:11:11.116 11:06:39 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:11.116 
11:06:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:11.116 11:06:39 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:11.116 11:06:39 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:11.116 11:06:39 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:11.116 11:06:39 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:11.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:11.640 Waiting for block devices as requested 00:11:11.640 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:11.901 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:11.901 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:11.901 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:17.189 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:17.189 11:06:45 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:17.189 11:06:45 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:17.451 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:17.451 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.451 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:17.712 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:17.971 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:17.971 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:18.231 11:06:46 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:18.231 11:06:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:18.231 11:06:47 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:18.231 11:06:47 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:18.231 11:06:47 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78884 00:11:18.231 11:06:47 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:18.231 11:06:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:18.231 11:06:47 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:18.231 11:06:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:18.231 11:06:47 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:18.231 11:06:47 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:18.231 11:06:47 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:18.231 11:06:47 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:18.231 11:06:47 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:11:18.231 11:06:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:18.231 11:06:47 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:11:18.231 11:06:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:18.231 11:06:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:18.231 11:06:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:18.493 Initializing NVMe Controllers 00:11:18.493 Attaching to 0000:00:10.0 00:11:18.493 Attaching to 0000:00:11.0 00:11:18.493 Attached to 0000:00:11.0 00:11:18.493 Attached to 0000:00:10.0 00:11:18.493 Initialization complete. Starting I/O... 00:11:18.493 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:18.493 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:18.493 00:11:19.437 QEMU NVMe Ctrl (12341 ): 2280 I/Os completed (+2280) 00:11:19.437 QEMU NVMe Ctrl (12340 ): 2280 I/Os completed (+2280) 00:11:19.437 00:11:20.379 QEMU NVMe Ctrl (12341 ): 5016 I/Os completed (+2736) 00:11:20.379 QEMU NVMe Ctrl (12340 ): 5025 I/Os completed (+2745) 00:11:20.379 00:11:21.766 QEMU NVMe Ctrl (12341 ): 7860 I/Os completed (+2844) 00:11:21.766 QEMU NVMe Ctrl (12340 ): 7873 I/Os completed (+2848) 00:11:21.766 00:11:22.339 QEMU NVMe Ctrl (12341 ): 11868 I/Os completed (+4008) 00:11:22.339 QEMU NVMe Ctrl (12340 ): 11881 I/Os completed (+4008) 00:11:22.339 00:11:23.338 QEMU NVMe Ctrl (12341 ): 16262 I/Os completed (+4394) 00:11:23.338 QEMU NVMe Ctrl (12340 ): 16285 I/Os completed (+4404) 00:11:23.338 00:11:24.292 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:24.292 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.292 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.292 [2024-11-27 11:06:53.015208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:24.292 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:24.292 [2024-11-27 11:06:53.016591] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.292 [2024-11-27 11:06:53.016640] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.292 [2024-11-27 11:06:53.016654] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.292 [2024-11-27 11:06:53.016667] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.292 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:24.292 [2024-11-27 11:06:53.017648] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.292 [2024-11-27 11:06:53.017684] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.292 [2024-11-27 11:06:53.017694] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.292 [2024-11-27 11:06:53.017706] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.292 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.292 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.292 [2024-11-27 11:06:53.036287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
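
Note on the device enumeration traced further up: nvme_in_userspace builds the test's controller list purely from PCI class codes. Class 01h (mass storage), subclass 08h (non-volatile memory) and prog-if 02h (NVM Express) become the "0108" and "-p02" strings fed to awk and grep, and each candidate BDF is then checked against the allow-list by pci_can_use. A minimal stand-alone version of that enumeration, assuming nothing beyond a Linux host with lspci installed:

  # Print the BDF of every NVMe controller (PCI class 0x010802), mirroring the
  # lspci | grep | awk | tr pipeline traced in scripts/common.sh above.
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

The four controllers it finds here (0000:00:10.0 through 0000:00:13.0) are trimmed to nvme_count=2, and setup.sh is re-run with PCI_ALLOWED='0000:00:10.0 0000:00:11.0', which is why 0000:00:12.0 and 0000:00:13.0 show up as "Skipping denied controller" and sit out the hotplug loop.
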
00:11:24.292 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:24.292 [2024-11-27 11:06:53.037094] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.292 [2024-11-27 11:06:53.037127] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.292 [2024-11-27 11:06:53.037141] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.293 [2024-11-27 11:06:53.037153] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.293 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:24.293 [2024-11-27 11:06:53.038048] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.293 [2024-11-27 11:06:53.038074] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.293 [2024-11-27 11:06:53.038088] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.293 [2024-11-27 11:06:53.038098] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.293 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:24.293 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:24.293 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:24.293 EAL: Scan for (pci) bus failed. 00:11:24.293 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.293 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.293 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:24.554 00:11:24.554 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:24.554 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.554 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.554 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.554 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:24.554 Attaching to 0000:00:10.0 00:11:24.554 Attached to 0000:00:10.0 00:11:24.554 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:24.554 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.554 11:06:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:24.554 Attaching to 0000:00:11.0 00:11:24.554 Attached to 0000:00:11.0 00:11:25.497 QEMU NVMe Ctrl (12340 ): 4168 I/Os completed (+4168) 00:11:25.497 QEMU NVMe Ctrl (12341 ): 3847 I/Os completed (+3847) 00:11:25.497 00:11:26.441 QEMU NVMe Ctrl (12340 ): 8560 I/Os completed (+4392) 00:11:26.441 QEMU NVMe Ctrl (12341 ): 8243 I/Os completed (+4396) 00:11:26.441 00:11:27.385 QEMU NVMe Ctrl (12340 ): 12969 I/Os completed (+4409) 00:11:27.385 QEMU NVMe Ctrl (12341 ): 12659 I/Os completed (+4416) 00:11:27.385 00:11:28.328 QEMU NVMe Ctrl (12340 ): 17373 I/Os completed (+4404) 00:11:28.329 QEMU NVMe Ctrl (12341 ): 17063 I/Os completed (+4404) 00:11:28.329 00:11:29.715 QEMU NVMe Ctrl (12340 ): 21753 I/Os completed (+4380) 00:11:29.715 QEMU NVMe Ctrl (12341 ): 21443 I/Os completed (+4380) 00:11:29.715 00:11:30.657 QEMU NVMe Ctrl (12340 ): 26133 I/Os completed (+4380) 00:11:30.657 QEMU NVMe Ctrl (12341 ): 25823 I/Os completed (+4380) 00:11:30.657 00:11:31.600 QEMU NVMe Ctrl (12340 ): 30509 I/Os completed (+4376) 00:11:31.600 QEMU NVMe Ctrl (12341 ): 30199 I/Os completed (+4376) 
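
The removal and re-attach above is driven entirely through sysfs rather than physical hotplug: sw_hotplug.sh@40 echoes 1 per device to surprise-remove it while the example app still has I/O in flight (hence the streams of "aborting outstanding command"), @56 echoes 1 once to rescan the bus, and @58-@62 push each recovered function back onto uio_pci_generic. The trace only shows the values being written, not the redirection targets, so the paths below are the standard Linux PCI sysfs nodes and should be read as a reconstruction of one cycle, not a verbatim copy of the script:

  bdf=0000:00:10.0
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"                          # surprise-remove the function (@40)
  echo 1 > /sys/bus/pci/rescan                                         # bring it back onto the bus (@56)
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # pin the userspace driver (@59)
  echo "$bdf" > /sys/bus/pci/drivers_probe                             # ask the kernel to (re)bind it (@60)
  echo '' > "/sys/bus/pci/devices/$bdf/driver_override"                # clear the override (@62)

Once the functions reappear under the userspace driver, the hotplug example's own monitor picks them up again, which is the "Attaching to 0000:00:10.0 / 0000:00:11.0" pair in the log, and I/O resumes for the next 12-second window.
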
00:11:31.600 00:11:32.544 QEMU NVMe Ctrl (12340 ): 34825 I/Os completed (+4316) 00:11:32.544 QEMU NVMe Ctrl (12341 ): 34519 I/Os completed (+4320) 00:11:32.544 00:11:33.486 QEMU NVMe Ctrl (12340 ): 38553 I/Os completed (+3728) 00:11:33.486 QEMU NVMe Ctrl (12341 ): 38247 I/Os completed (+3728) 00:11:33.486 00:11:34.430 QEMU NVMe Ctrl (12340 ): 42273 I/Os completed (+3720) 00:11:34.430 QEMU NVMe Ctrl (12341 ): 41967 I/Os completed (+3720) 00:11:34.430 00:11:35.370 QEMU NVMe Ctrl (12340 ): 46272 I/Os completed (+3999) 00:11:35.370 QEMU NVMe Ctrl (12341 ): 45968 I/Os completed (+4001) 00:11:35.370 00:11:36.743 QEMU NVMe Ctrl (12340 ): 50498 I/Os completed (+4226) 00:11:36.743 QEMU NVMe Ctrl (12341 ): 50176 I/Os completed (+4208) 00:11:36.743 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.743 [2024-11-27 11:07:05.328664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:36.743 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:36.743 [2024-11-27 11:07:05.329474] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 [2024-11-27 11:07:05.329503] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 [2024-11-27 11:07:05.329514] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 [2024-11-27 11:07:05.329529] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:36.743 [2024-11-27 11:07:05.330509] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 [2024-11-27 11:07:05.330541] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 [2024-11-27 11:07:05.330551] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 [2024-11-27 11:07:05.330564] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.743 [2024-11-27 11:07:05.349756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:36.743 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:36.743 [2024-11-27 11:07:05.350543] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 [2024-11-27 11:07:05.350573] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 [2024-11-27 11:07:05.350589] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 [2024-11-27 11:07:05.350601] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:36.743 [2024-11-27 11:07:05.351481] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 [2024-11-27 11:07:05.351507] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 [2024-11-27 11:07:05.351521] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 [2024-11-27 11:07:05.351530] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.743 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:36.743 EAL: Scan for (pci) bus failed. 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:36.743 Attaching to 0000:00:10.0 00:11:36.743 Attached to 0000:00:10.0 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.743 11:07:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:36.743 Attaching to 0000:00:11.0 00:11:36.743 Attached to 0000:00:11.0 00:11:37.676 QEMU NVMe Ctrl (12340 ): 2897 I/Os completed (+2897) 00:11:37.676 QEMU NVMe Ctrl (12341 ): 2500 I/Os completed (+2500) 00:11:37.676 00:11:38.619 QEMU NVMe Ctrl (12340 ): 6636 I/Os completed (+3739) 00:11:38.619 QEMU NVMe Ctrl (12341 ): 6261 I/Os completed (+3761) 00:11:38.619 00:11:39.561 QEMU NVMe Ctrl (12340 ): 9613 I/Os completed (+2977) 00:11:39.561 QEMU NVMe Ctrl (12341 ): 9246 I/Os completed (+2985) 00:11:39.561 00:11:40.495 QEMU NVMe Ctrl (12340 ): 13342 I/Os completed (+3729) 00:11:40.495 QEMU NVMe Ctrl (12341 ): 12985 I/Os completed (+3739) 00:11:40.495 00:11:41.428 QEMU NVMe Ctrl (12340 ): 17516 I/Os completed (+4174) 00:11:41.428 QEMU NVMe Ctrl (12341 ): 17147 I/Os completed (+4162) 00:11:41.428 00:11:42.363 QEMU NVMe Ctrl (12340 ): 21688 I/Os completed (+4172) 00:11:42.363 QEMU NVMe Ctrl (12341 ): 21339 I/Os completed (+4192) 00:11:42.363 00:11:43.736 QEMU NVMe Ctrl (12340 ): 25843 I/Os completed (+4155) 00:11:43.736 QEMU NVMe Ctrl (12341 ): 25512 I/Os completed (+4173) 00:11:43.736 
00:11:44.669 QEMU NVMe Ctrl (12340 ): 29987 I/Os completed (+4144) 00:11:44.669 QEMU NVMe Ctrl (12341 ): 29664 I/Os completed (+4152) 00:11:44.669 00:11:45.612 QEMU NVMe Ctrl (12340 ): 33757 I/Os completed (+3770) 00:11:45.612 QEMU NVMe Ctrl (12341 ): 33397 I/Os completed (+3733) 00:11:45.612 00:11:46.557 QEMU NVMe Ctrl (12340 ): 36801 I/Os completed (+3044) 00:11:46.557 QEMU NVMe Ctrl (12341 ): 36454 I/Os completed (+3057) 00:11:46.557 00:11:47.503 QEMU NVMe Ctrl (12340 ): 39773 I/Os completed (+2972) 00:11:47.503 QEMU NVMe Ctrl (12341 ): 39425 I/Os completed (+2971) 00:11:47.503 00:11:48.446 QEMU NVMe Ctrl (12340 ): 42809 I/Os completed (+3036) 00:11:48.446 QEMU NVMe Ctrl (12341 ): 42464 I/Os completed (+3039) 00:11:48.446 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.018 [2024-11-27 11:07:17.612049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:49.018 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:49.018 [2024-11-27 11:07:17.613307] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 [2024-11-27 11:07:17.613518] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 [2024-11-27 11:07:17.613542] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 [2024-11-27 11:07:17.613562] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:49.018 [2024-11-27 11:07:17.615044] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 [2024-11-27 11:07:17.615099] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 [2024-11-27 11:07:17.615115] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 [2024-11-27 11:07:17.615131] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.018 [2024-11-27 11:07:17.635144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
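
The whole helper runs under bash's time keyword: timing_cmd (traced earlier in common/autotest_common.sh) sets TIMEFORMAT=%2R, which reduces time's report to the elapsed wall-clock seconds with two decimals, and that single number is what becomes helper_time and the "remove_attach_helper took 42.93s" summary printed at the end of this pass, just below. The idiom in isolation:

  # TIMEFORMAT controls what bash's `time` keyword prints; %2R is the real
  # (wall-clock) time only, with two decimal places, e.g. "42.93".
  TIMEFORMAT=%2R
  time sleep 1.5        # prints roughly 1.50 on stderr
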
00:11:49.018 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:49.018 [2024-11-27 11:07:17.636225] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 [2024-11-27 11:07:17.636266] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 [2024-11-27 11:07:17.636283] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 [2024-11-27 11:07:17.636297] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:49.018 [2024-11-27 11:07:17.637557] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 [2024-11-27 11:07:17.637593] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 [2024-11-27 11:07:17.637610] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 [2024-11-27 11:07:17.637623] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:49.018 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:49.018 EAL: Scan for (pci) bus failed. 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.018 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:49.018 Attaching to 0000:00:10.0 00:11:49.018 Attached to 0000:00:10.0 00:11:49.278 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:49.278 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.278 11:07:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.278 Attaching to 0000:00:11.0 00:11:49.278 Attached to 0000:00:11.0 00:11:49.278 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:49.278 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:49.278 [2024-11-27 11:07:17.952511] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:01.510 11:07:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:01.510 11:07:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:01.510 11:07:29 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.93 00:12:01.510 11:07:29 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.93 00:12:01.510 11:07:29 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:01.510 11:07:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.93 00:12:01.510 11:07:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.93 2 00:12:01.510 remove_attach_helper took 42.93s to complete (handling 2 nvme drive(s)) 11:07:29 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:08.103 11:07:35 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78884 00:12:08.103 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78884) - No such process 00:12:08.103 11:07:35 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78884 00:12:08.103 11:07:35 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:08.103 11:07:35 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:08.103 11:07:35 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:08.103 11:07:35 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79433 00:12:08.103 11:07:35 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:08.103 11:07:35 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:08.103 11:07:35 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79433 00:12:08.103 11:07:35 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 79433 ']' 00:12:08.103 11:07:35 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.103 11:07:35 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:08.103 11:07:35 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.103 11:07:35 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:08.103 11:07:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.103 [2024-11-27 11:07:36.041844] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
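
From here the test switches harness. The standalone hotplug example has already exited (the kill -0 probe above finds pid 78884 gone), and tgt_run_hotplug repeats the same three hotplug events against a full SPDK target: spdk_tgt is started as pid 79433, bdev_nvme_set_hotplug -e (sw_hotplug.sh@115, a few entries below) enables the PCIe hotplug monitor over JSON-RPC, and device presence is now judged from the bdev layer instead of the example app's console output. The rpc_cmd calls in the trace are the harness's wrapper for requests one could issue by hand; assuming scripts/rpc.py from this SPDK tree and the default /var/tmp/spdk.sock socket, a rough manual equivalent is:

  ./build/bin/spdk_tgt &                       # the target process started above
  ./scripts/rpc.py bdev_nvme_set_hotplug -e    # turn on hotplug monitoring
  ./scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[].driver_specific.nvme[].pci_address' \
      | sort -u                                # BDFs currently backing NVMe bdevs
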
00:12:08.103 [2024-11-27 11:07:36.042256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79433 ] 00:12:08.103 [2024-11-27 11:07:36.194501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.103 [2024-11-27 11:07:36.247406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.103 11:07:36 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:08.103 11:07:36 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:12:08.103 11:07:36 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:08.103 11:07:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.103 11:07:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.103 11:07:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.103 11:07:36 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:08.103 11:07:36 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:08.103 11:07:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:08.103 11:07:36 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:08.103 11:07:36 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:08.103 11:07:36 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:08.103 11:07:36 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:08.103 11:07:36 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:08.103 11:07:36 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:08.103 11:07:36 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:08.103 11:07:36 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:08.103 11:07:36 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:08.103 11:07:36 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:14.674 11:07:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.674 11:07:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.674 11:07:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.674 11:07:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.674 11:07:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.674 11:07:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:14.674 11:07:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.674 11:07:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.674 11:07:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.674 11:07:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.674 11:07:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.674 11:07:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.674 11:07:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.674 11:07:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.674 11:07:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:14.674 11:07:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:14.674 [2024-11-27 11:07:42.998077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:12:14.674 [2024-11-27 11:07:42.999156] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.674 [2024-11-27 11:07:42.999275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.674 [2024-11-27 11:07:42.999298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.674 [2024-11-27 11:07:42.999311] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.674 [2024-11-27 11:07:42.999319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.674 [2024-11-27 11:07:42.999326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.674 [2024-11-27 11:07:42.999336] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.675 [2024-11-27 11:07:42.999343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.675 [2024-11-27 11:07:42.999350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.675 [2024-11-27 11:07:42.999357] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.675 [2024-11-27 11:07:42.999365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.675 [2024-11-27 11:07:42.999371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.675 11:07:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:14.675 11:07:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.675 11:07:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.675 11:07:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.675 11:07:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.675 11:07:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.675 11:07:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.675 11:07:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.675 11:07:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.675 11:07:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:14.675 11:07:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:14.933 [2024-11-27 11:07:43.598087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:14.933 [2024-11-27 11:07:43.599204] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.933 [2024-11-27 11:07:43.599234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.933 [2024-11-27 11:07:43.599244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.933 [2024-11-27 11:07:43.599255] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.933 [2024-11-27 11:07:43.599261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.933 [2024-11-27 11:07:43.599269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.933 [2024-11-27 11:07:43.599275] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.933 [2024-11-27 11:07:43.599283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.933 [2024-11-27 11:07:43.599289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.933 [2024-11-27 11:07:43.599297] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.933 [2024-11-27 11:07:43.599304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.933 [2024-11-27 11:07:43.599311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.191 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:15.191 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:15.191 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:15.191 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:15.191 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:15.191 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:15.191 11:07:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.191 11:07:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.191 11:07:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.191 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:15.191 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:15.449 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:15.449 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:15.449 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:15.449 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:15.449 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.449 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:15.449 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:15.449 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
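
With use_bdev=true the helper no longer takes the example app's word for a removal; it polls the target instead. bdev_bdfs (sw_hotplug.sh@12-@13) is exactly the bdev_get_bdevs | jq | sort -u pipeline above, and lines @50-@51 keep re-running it after each surprise-remove, printing "Still waiting for %s to be gone" and sleeping half a second until the removed controllers stop being reported. The loop body is not shown verbatim in the trace, so the following is only an approximation of its shape:

  # Poll the target until no NVMe-backed bdev still references a removed
  # controller; bdev_bdfs is the test's rpc_cmd/jq helper seen in the trace.
  while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
  done

Once the count reaches zero (the "(( 0 > 0 ))" test above), the helper moves on to the rescan and rebind half of the cycle.
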
00:12:15.449 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:15.449 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.449 11:07:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:27.647 11:07:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.647 11:07:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.647 11:07:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:27.647 11:07:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.647 11:07:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.647 11:07:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:27.647 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:27.647 [2024-11-27 11:07:56.398259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:27.647 [2024-11-27 11:07:56.399426] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.647 [2024-11-27 11:07:56.399538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.647 [2024-11-27 11:07:56.399599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.647 [2024-11-27 11:07:56.399646] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.647 [2024-11-27 11:07:56.399667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.647 [2024-11-27 11:07:56.399690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.647 [2024-11-27 11:07:56.399715] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.647 [2024-11-27 11:07:56.399764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.647 [2024-11-27 11:07:56.399833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.647 [2024-11-27 11:07:56.399857] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:27.647 [2024-11-27 11:07:56.399874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.647 [2024-11-27 11:07:56.399938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.214 [2024-11-27 11:07:56.798263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:28.214 [2024-11-27 11:07:56.799389] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.214 [2024-11-27 11:07:56.799493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.214 [2024-11-27 11:07:56.799552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.214 [2024-11-27 11:07:56.799619] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.214 [2024-11-27 11:07:56.799638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.214 [2024-11-27 11:07:56.799662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.214 [2024-11-27 11:07:56.799684] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.214 [2024-11-27 11:07:56.799701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.214 [2024-11-27 11:07:56.799758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.214 [2024-11-27 11:07:56.799784] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.214 [2024-11-27 11:07:56.799799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.214 [2024-11-27 11:07:56.799823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.214 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:28.214 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:28.214 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:28.214 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.214 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.214 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.214 11:07:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.214 11:07:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.214 11:07:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.214 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:28.215 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:28.215 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:28.215 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:28.215 11:07:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:28.215 11:07:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:28.215 11:07:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:28.215 11:07:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:28.215 11:07:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:28.215 11:07:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:28.473 11:07:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:28.473 11:07:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:28.473 11:07:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:40.670 11:08:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.670 11:08:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:40.670 11:08:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:40.670 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:40.671 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:40.671 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:40.671 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:40.671 11:08:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.671 11:08:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 11:08:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.671 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:40.671 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:40.671 [2024-11-27 11:08:09.298443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
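
Each event in this phase ends with a consistency check: after the rescan and rebind, sw_hotplug.sh@70-@71 re-read the bdev-backed BDFs and compare them (the escaped glob pattern visible above) against the original pair, so an iteration only counts once both 0000:00:10.0 and 0000:00:11.0 have come back. In plainer form, under the same assumption that bdev_bdfs wraps the RPC/jq pipeline, and with an explicit exit standing in for however the harness actually reports failure:

  # Fail the iteration unless exactly the original controllers re-appeared.
  bdfs=($(bdev_bdfs))
  [[ "${bdfs[*]}" == "${nvmes[*]}" ]] || exit 1
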
00:12:40.671 [2024-11-27 11:08:09.299497] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.671 [2024-11-27 11:08:09.299528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.671 [2024-11-27 11:08:09.299542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.671 [2024-11-27 11:08:09.299553] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.671 [2024-11-27 11:08:09.299562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.671 [2024-11-27 11:08:09.299569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.671 [2024-11-27 11:08:09.299577] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.671 [2024-11-27 11:08:09.299583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.671 [2024-11-27 11:08:09.299591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.671 [2024-11-27 11:08:09.299597] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.671 [2024-11-27 11:08:09.299604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.671 [2024-11-27 11:08:09.299611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.929 [2024-11-27 11:08:09.698445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:40.929 [2024-11-27 11:08:09.699544] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.929 [2024-11-27 11:08:09.699575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.929 [2024-11-27 11:08:09.699584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.929 [2024-11-27 11:08:09.699595] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.929 [2024-11-27 11:08:09.699601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.929 [2024-11-27 11:08:09.699611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.929 [2024-11-27 11:08:09.699617] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.929 [2024-11-27 11:08:09.699625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.929 [2024-11-27 11:08:09.699631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.929 [2024-11-27 11:08:09.699638] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.929 [2024-11-27 11:08:09.699644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.929 [2024-11-27 11:08:09.699652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.929 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:40.929 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:40.929 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:40.929 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:40.929 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:40.929 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:40.929 11:08:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.929 11:08:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:40.929 11:08:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.188 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:41.188 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:41.188 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:41.188 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:41.188 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:41.188 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:41.188 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:41.188 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:41.188 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:41.188 11:08:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:41.188 11:08:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:41.188 11:08:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:41.188 11:08:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:53.403 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:53.403 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:53.403 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:53.403 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:53.403 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:53.403 11:08:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.403 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:53.403 11:08:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.403 11:08:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.404 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:53.404 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.15 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.15 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:53.404 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.15 00:12:53.404 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.15 2 00:12:53.404 remove_attach_helper took 45.15s to complete (handling 2 nvme drive(s)) 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.404 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.404 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:53.404 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:53.404 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:53.404 11:08:22 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:53.404 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:53.404 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:53.404 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:53.404 11:08:22 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:53.404 11:08:22 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:59.963 11:08:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:59.963 11:08:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:59.963 11:08:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:59.963 [2024-11-27 11:08:28.181588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:59.963 [2024-11-27 11:08:28.182454] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.963 [2024-11-27 11:08:28.182551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.963 [2024-11-27 11:08:28.182633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.963 [2024-11-27 11:08:28.182684] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.963 [2024-11-27 11:08:28.182705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.963 [2024-11-27 11:08:28.182730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.963 [2024-11-27 11:08:28.182782] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.963 [2024-11-27 11:08:28.182848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.963 [2024-11-27 11:08:28.182874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.963 [2024-11-27 11:08:28.182910] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.963 [2024-11-27 11:08:28.182928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.963 [2024-11-27 11:08:28.182986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.963 [2024-11-27 11:08:28.581592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
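
Before this final sequence the test flips the hotplug monitor off and back on over RPC (sw_hotplug.sh@119-@120, traced just above, after the 45.15 s summary) and then reruns the same 3-event helper, which is the pass unfolding here. Issued by hand against the running target, and again assuming rpc_cmd is a thin wrapper over scripts/rpc.py, the toggle would look like:

  ./scripts/rpc.py bdev_nvme_set_hotplug -d   # disable the PCIe hotplug monitor
  ./scripts/rpc.py bdev_nvme_set_hotplug -e   # re-enable it for the next pass
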
00:12:59.963 [2024-11-27 11:08:28.582656] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.963 [2024-11-27 11:08:28.582759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.963 [2024-11-27 11:08:28.582818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.963 [2024-11-27 11:08:28.582846] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.963 [2024-11-27 11:08:28.582924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.963 [2024-11-27 11:08:28.582953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.963 [2024-11-27 11:08:28.583064] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.963 [2024-11-27 11:08:28.583126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.963 [2024-11-27 11:08:28.583152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.963 [2024-11-27 11:08:28.583176] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.963 [2024-11-27 11:08:28.583224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.963 [2024-11-27 11:08:28.583255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:59.963 11:08:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:59.963 11:08:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:59.963 11:08:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:59.963 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:00.222 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:00.222 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:00.222 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:00.222 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:00.222 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:00.222 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:00.222 11:08:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:12.436 11:08:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.436 11:08:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:12.436 11:08:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:12.436 [2024-11-27 11:08:40.981783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:12.436 [2024-11-27 11:08:40.982725] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.436 [2024-11-27 11:08:40.982810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.436 [2024-11-27 11:08:40.982935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.436 [2024-11-27 11:08:40.983035] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.436 [2024-11-27 11:08:40.983056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.436 [2024-11-27 11:08:40.983080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.436 [2024-11-27 11:08:40.983130] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.436 [2024-11-27 11:08:40.983148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.436 [2024-11-27 11:08:40.983187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.436 [2024-11-27 11:08:40.983210] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.436 [2024-11-27 11:08:40.983228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.436 [2024-11-27 11:08:40.983305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:12.436 11:08:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.436 11:08:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:12.436 11:08:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:12.436 11:08:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.436 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:12.436 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:12.697 [2024-11-27 11:08:41.381786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:13:12.697 [2024-11-27 11:08:41.382614] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.697 [2024-11-27 11:08:41.382722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.697 [2024-11-27 11:08:41.382784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.697 [2024-11-27 11:08:41.382840] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.697 [2024-11-27 11:08:41.382859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.697 [2024-11-27 11:08:41.382924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.697 [2024-11-27 11:08:41.382948] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.697 [2024-11-27 11:08:41.382994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.697 [2024-11-27 11:08:41.383020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.697 [2024-11-27 11:08:41.383289] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.697 [2024-11-27 11:08:41.383606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.697 [2024-11-27 11:08:41.383872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.697 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:12.697 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:12.697 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:12.697 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:12.697 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:12.697 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' 
/dev/fd/63 00:13:12.697 11:08:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.697 11:08:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:12.697 11:08:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.697 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:12.697 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:12.959 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:12.959 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:12.959 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:12.959 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:12.959 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:12.959 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:12.959 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:12.959 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:13.219 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:13.219 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:13.219 11:08:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:25.415 11:08:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.415 11:08:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:25.415 11:08:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:25.415 11:08:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.415 11:08:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:25.415 11:08:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:25.415 11:08:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:25.415 [2024-11-27 11:08:53.982003] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:25.415 [2024-11-27 11:08:53.982780] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.415 [2024-11-27 11:08:53.982799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.415 [2024-11-27 11:08:53.982810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.415 [2024-11-27 11:08:53.982821] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.415 [2024-11-27 11:08:53.982831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.415 [2024-11-27 11:08:53.982839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.415 [2024-11-27 11:08:53.982846] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.415 [2024-11-27 11:08:53.982853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.415 [2024-11-27 11:08:53.982861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.415 [2024-11-27 11:08:53.982867] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.415 [2024-11-27 11:08:53.982874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.415 [2024-11-27 11:08:53.982881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.673 [2024-11-27 11:08:54.382013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
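The echo traces at sw_hotplug.sh@40 and @56-@62 are plain sysfs writes: drop each controller off the PCI bus, rescan, and steer the device back to the userspace driver. The trace only shows the values written, not the target files, so the following is a general-pattern sketch built on the standard sysfs attributes, not a line-for-line reconstruction of the script:

# Software "unplug": remove each NVMe controller from the PCI bus (the "echo 1" at @40).
for bdf in 0000:00:10.0 0000:00:11.0; do
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"
done

# Software "replug": rescan the bus (the "echo 1" at @56), then pin each device to
# uio_pci_generic, probe it, and clear the override again (@58-@62).
echo 1 > /sys/bus/pci/rescan
for bdf in 0000:00:10.0 0000:00:11.0; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
done

After these writes the script sleeps 12 seconds (@66) and then verifies through the bdev layer that both BDFs are back (@68-71).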
00:13:25.673 [2024-11-27 11:08:54.382934] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.673 [2024-11-27 11:08:54.382961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.673 [2024-11-27 11:08:54.382971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.673 [2024-11-27 11:08:54.382982] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.673 [2024-11-27 11:08:54.382989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.673 [2024-11-27 11:08:54.382997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.673 [2024-11-27 11:08:54.383005] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.673 [2024-11-27 11:08:54.383015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.673 [2024-11-27 11:08:54.383022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.673 [2024-11-27 11:08:54.383030] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.673 [2024-11-27 11:08:54.383036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.674 [2024-11-27 11:08:54.383044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.674 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:25.674 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:25.674 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:25.674 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:25.674 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:25.674 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:25.674 11:08:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.674 11:08:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:25.674 11:08:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.674 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:25.674 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:25.932 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:25.932 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:25.932 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:25.932 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:25.932 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:25.932 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:25.932 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:25.932 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:25.932 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:25.932 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:25.932 11:08:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:38.223 11:09:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:38.223 11:09:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:38.223 11:09:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:38.223 11:09:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:38.223 11:09:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:38.223 11:09:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.223 11:09:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:38.223 11:09:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.69 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.69 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:38.223 11:09:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.69 00:13:38.223 11:09:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.69 2 00:13:38.223 remove_attach_helper took 44.69s to complete (handling 2 nvme drive(s)) 11:09:06 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:38.223 11:09:06 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79433 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 79433 ']' 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 79433 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79433 00:13:38.223 killing process with pid 79433 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79433' 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@969 -- # kill 79433 00:13:38.223 11:09:06 sw_hotplug -- common/autotest_common.sh@974 -- # wait 79433 00:13:38.223 11:09:07 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:38.800 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:39.062 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:39.062 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:39.062 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:39.062 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:39.324 00:13:39.324 real 2m28.721s 00:13:39.324 user 1m48.892s 00:13:39.324 sys 0m18.404s 00:13:39.324 11:09:07 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.324 ************************************ 00:13:39.324 END TEST sw_hotplug 00:13:39.324 ************************************ 00:13:39.324 11:09:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:39.324 11:09:08 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:39.324 11:09:08 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:39.324 11:09:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:39.324 11:09:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.324 11:09:08 -- common/autotest_common.sh@10 -- # set +x 00:13:39.324 ************************************ 00:13:39.324 START TEST nvme_xnvme 00:13:39.324 ************************************ 00:13:39.324 11:09:08 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:39.324 * Looking for test storage... 00:13:39.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:39.324 11:09:08 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:39.324 11:09:08 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:39.324 11:09:08 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:39.324 11:09:08 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.324 11:09:08 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:39.586 11:09:08 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.586 11:09:08 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:39.586 11:09:08 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:39.586 11:09:08 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.586 11:09:08 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:39.586 11:09:08 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.586 11:09:08 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.586 11:09:08 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.586 11:09:08 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:39.586 11:09:08 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.586 11:09:08 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:39.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.586 --rc genhtml_branch_coverage=1 00:13:39.586 --rc genhtml_function_coverage=1 00:13:39.586 --rc genhtml_legend=1 00:13:39.586 --rc geninfo_all_blocks=1 00:13:39.586 --rc geninfo_unexecuted_blocks=1 00:13:39.586 00:13:39.586 ' 00:13:39.586 11:09:08 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:39.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.586 --rc genhtml_branch_coverage=1 00:13:39.586 --rc genhtml_function_coverage=1 00:13:39.586 --rc genhtml_legend=1 00:13:39.586 --rc geninfo_all_blocks=1 00:13:39.586 --rc geninfo_unexecuted_blocks=1 00:13:39.586 00:13:39.586 ' 00:13:39.586 11:09:08 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:39.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.586 --rc genhtml_branch_coverage=1 00:13:39.586 --rc genhtml_function_coverage=1 00:13:39.586 --rc genhtml_legend=1 00:13:39.586 --rc geninfo_all_blocks=1 00:13:39.586 --rc geninfo_unexecuted_blocks=1 00:13:39.586 00:13:39.586 ' 00:13:39.586 11:09:08 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:39.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.586 --rc genhtml_branch_coverage=1 00:13:39.586 --rc genhtml_function_coverage=1 00:13:39.586 --rc genhtml_legend=1 00:13:39.586 --rc geninfo_all_blocks=1 00:13:39.586 --rc geninfo_unexecuted_blocks=1 00:13:39.586 00:13:39.586 ' 00:13:39.586 11:09:08 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:39.586 11:09:08 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:39.586 11:09:08 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.586 11:09:08 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.586 11:09:08 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.586 11:09:08 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.586 11:09:08 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.586 11:09:08 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.587 11:09:08 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:39.587 11:09:08 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.587 11:09:08 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:39.587 11:09:08 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:39.587 11:09:08 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.587 11:09:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.587 ************************************ 00:13:39.587 START TEST xnvme_to_malloc_dd_copy 00:13:39.587 ************************************ 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:39.587 11:09:08 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:39.587 11:09:08 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:39.587 { 00:13:39.587 "subsystems": [ 00:13:39.587 { 00:13:39.587 "subsystem": "bdev", 00:13:39.587 "config": [ 00:13:39.587 { 00:13:39.587 "params": { 00:13:39.587 "block_size": 512, 00:13:39.587 "num_blocks": 2097152, 00:13:39.587 "name": "malloc0" 00:13:39.587 }, 00:13:39.587 "method": "bdev_malloc_create" 00:13:39.587 }, 00:13:39.587 { 00:13:39.587 "params": { 00:13:39.587 "io_mechanism": "libaio", 00:13:39.587 "filename": "/dev/nullb0", 00:13:39.587 "name": "null0" 00:13:39.587 }, 00:13:39.587 "method": "bdev_xnvme_create" 00:13:39.587 }, 00:13:39.587 { 00:13:39.587 "method": "bdev_wait_for_examine" 00:13:39.587 } 00:13:39.587 ] 00:13:39.587 } 00:13:39.587 ] 00:13:39.587 } 00:13:39.587 [2024-11-27 11:09:08.341875] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
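The JSON block printed above is the complete bdev configuration spdk_dd reads from /dev/fd/62: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) as the source and an xnvme bdev on /dev/nullb0 as the destination. A hand-run equivalent, assuming a built SPDK tree and the null_blk module loaded as in init_null_blk, could look like this (a temp file stands in for the /dev/fd descriptor the harness creates):

# 1 GiB null block device for the xnvme bdev to sit on (init_null_blk gb=1 above).
modprobe null_blk gb=1

# Same malloc0 -> null0 config the trace shows; the test later reruns with
# --ib/--ob swapped to cover the read path and with io_mechanism "io_uring".
conf=$(mktemp)
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json "$conf"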
00:13:39.587 [2024-11-27 11:09:08.342034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80793 ] 00:13:39.848 [2024-11-27 11:09:08.493697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.849 [2024-11-27 11:09:08.546128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.262  [2024-11-27T11:09:11.090Z] Copying: 225/1024 [MB] (225 MBps) [2024-11-27T11:09:12.030Z] Copying: 450/1024 [MB] (225 MBps) [2024-11-27T11:09:12.966Z] Copying: 698/1024 [MB] (248 MBps) [2024-11-27T11:09:12.966Z] Copying: 1008/1024 [MB] (309 MBps) [2024-11-27T11:09:13.226Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:13:44.343 00:13:44.603 11:09:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:44.603 11:09:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:44.603 11:09:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:44.603 11:09:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:44.603 { 00:13:44.603 "subsystems": [ 00:13:44.603 { 00:13:44.603 "subsystem": "bdev", 00:13:44.603 "config": [ 00:13:44.603 { 00:13:44.603 "params": { 00:13:44.603 "block_size": 512, 00:13:44.603 "num_blocks": 2097152, 00:13:44.603 "name": "malloc0" 00:13:44.603 }, 00:13:44.603 "method": "bdev_malloc_create" 00:13:44.603 }, 00:13:44.603 { 00:13:44.603 "params": { 00:13:44.603 "io_mechanism": "libaio", 00:13:44.603 "filename": "/dev/nullb0", 00:13:44.603 "name": "null0" 00:13:44.603 }, 00:13:44.603 "method": "bdev_xnvme_create" 00:13:44.603 }, 00:13:44.603 { 00:13:44.603 "method": "bdev_wait_for_examine" 00:13:44.603 } 00:13:44.603 ] 00:13:44.603 } 00:13:44.603 ] 00:13:44.603 } 00:13:44.603 [2024-11-27 11:09:13.286662] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:44.603 [2024-11-27 11:09:13.286780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80858 ] 00:13:44.603 [2024-11-27 11:09:13.434998] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.603 [2024-11-27 11:09:13.477491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.979  [2024-11-27T11:09:15.797Z] Copying: 311/1024 [MB] (311 MBps) [2024-11-27T11:09:16.742Z] Copying: 624/1024 [MB] (312 MBps) [2024-11-27T11:09:17.309Z] Copying: 937/1024 [MB] (312 MBps) [2024-11-27T11:09:17.568Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:13:48.685 00:13:48.685 11:09:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:48.685 11:09:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:48.685 11:09:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:48.685 11:09:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:48.685 11:09:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:48.685 11:09:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:48.685 { 00:13:48.685 "subsystems": [ 00:13:48.685 { 00:13:48.685 "subsystem": "bdev", 00:13:48.685 "config": [ 00:13:48.685 { 00:13:48.685 "params": { 00:13:48.685 "block_size": 512, 00:13:48.685 "num_blocks": 2097152, 00:13:48.685 "name": "malloc0" 00:13:48.685 }, 00:13:48.685 "method": "bdev_malloc_create" 00:13:48.685 }, 00:13:48.685 { 00:13:48.685 "params": { 00:13:48.685 "io_mechanism": "io_uring", 00:13:48.685 "filename": "/dev/nullb0", 00:13:48.685 "name": "null0" 00:13:48.685 }, 00:13:48.685 "method": "bdev_xnvme_create" 00:13:48.685 }, 00:13:48.685 { 00:13:48.685 "method": "bdev_wait_for_examine" 00:13:48.685 } 00:13:48.685 ] 00:13:48.685 } 00:13:48.685 ] 00:13:48.685 } 00:13:48.685 [2024-11-27 11:09:17.395452] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:48.685 [2024-11-27 11:09:17.395562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80912 ] 00:13:48.685 [2024-11-27 11:09:17.542941] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.943 [2024-11-27 11:09:17.586217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.327  [2024-11-27T11:09:20.142Z] Copying: 317/1024 [MB] (317 MBps) [2024-11-27T11:09:21.077Z] Copying: 636/1024 [MB] (318 MBps) [2024-11-27T11:09:21.077Z] Copying: 955/1024 [MB] (318 MBps) [2024-11-27T11:09:21.644Z] Copying: 1024/1024 [MB] (average 318 MBps) 00:13:52.761 00:13:52.761 11:09:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:52.761 11:09:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:52.761 11:09:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:52.761 11:09:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:52.761 { 00:13:52.761 "subsystems": [ 00:13:52.761 { 00:13:52.761 "subsystem": "bdev", 00:13:52.761 "config": [ 00:13:52.761 { 00:13:52.761 "params": { 00:13:52.761 "block_size": 512, 00:13:52.761 "num_blocks": 2097152, 00:13:52.761 "name": "malloc0" 00:13:52.761 }, 00:13:52.761 "method": "bdev_malloc_create" 00:13:52.761 }, 00:13:52.761 { 00:13:52.761 "params": { 00:13:52.761 "io_mechanism": "io_uring", 00:13:52.761 "filename": "/dev/nullb0", 00:13:52.761 "name": "null0" 00:13:52.761 }, 00:13:52.761 "method": "bdev_xnvme_create" 00:13:52.761 }, 00:13:52.761 { 00:13:52.761 "method": "bdev_wait_for_examine" 00:13:52.761 } 00:13:52.762 ] 00:13:52.762 } 00:13:52.762 ] 00:13:52.762 } 00:13:52.762 [2024-11-27 11:09:21.423842] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
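The later configurations differ from the first pair only in io_mechanism (io_uring instead of libaio) and in copy direction; gen_conf, traced out of dd/common.sh above, assembles the JSON from the method_bdev_malloc_create_0 and method_bdev_xnvme_create_0 arrays set at xnvme.sh@28-39. A rough, purely illustrative stand-in that parameterizes the same document with jq:

# Illustrative only: emit the bdev config for a given io_mechanism and device,
# so one helper covers both entries of the xnvme_io loop (libaio, io_uring).
gen_xnvme_conf() {
    local io=$1 dev=$2
    jq -n --arg io "$io" --arg dev "$dev" '{
        subsystems: [{
            subsystem: "bdev",
            config: [
                { params: { block_size: 512, num_blocks: 2097152, name: "malloc0" },
                  method: "bdev_malloc_create" },
                { params: { io_mechanism: $io, filename: $dev, name: "null0" },
                  method: "bdev_xnvme_create" },
                { method: "bdev_wait_for_examine" }
            ]
        }]
    }'
}

# e.g.: gen_xnvme_conf io_uring /dev/nullb0 > conf.json && ./build/bin/spdk_dd --ib=null0 --ob=malloc0 --json conf.json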
00:13:52.762 [2024-11-27 11:09:21.423967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80961 ] 00:13:52.762 [2024-11-27 11:09:21.564360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.762 [2024-11-27 11:09:21.604581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.138  [2024-11-27T11:09:23.959Z] Copying: 323/1024 [MB] (323 MBps) [2024-11-27T11:09:24.892Z] Copying: 646/1024 [MB] (323 MBps) [2024-11-27T11:09:25.150Z] Copying: 970/1024 [MB] (323 MBps) [2024-11-27T11:09:25.409Z] Copying: 1024/1024 [MB] (average 323 MBps) 00:13:56.526 00:13:56.526 11:09:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:56.526 11:09:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:56.526 00:13:56.526 real 0m17.101s 00:13:56.526 user 0m14.162s 00:13:56.526 sys 0m2.440s 00:13:56.526 ************************************ 00:13:56.526 11:09:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.526 11:09:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:56.526 END TEST xnvme_to_malloc_dd_copy 00:13:56.526 ************************************ 00:13:56.526 11:09:25 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:56.526 11:09:25 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:56.526 11:09:25 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.526 11:09:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.526 ************************************ 00:13:56.526 START TEST xnvme_bdevperf 00:13:56.526 ************************************ 00:13:56.526 11:09:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:13:56.526 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:56.526 11:09:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:56.526 11:09:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:56.786 
11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:56.786 11:09:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:56.786 { 00:13:56.786 "subsystems": [ 00:13:56.786 { 00:13:56.786 "subsystem": "bdev", 00:13:56.786 "config": [ 00:13:56.786 { 00:13:56.786 "params": { 00:13:56.786 "io_mechanism": "libaio", 00:13:56.786 "filename": "/dev/nullb0", 00:13:56.786 "name": "null0" 00:13:56.786 }, 00:13:56.786 "method": "bdev_xnvme_create" 00:13:56.786 }, 00:13:56.786 { 00:13:56.786 "method": "bdev_wait_for_examine" 00:13:56.786 } 00:13:56.786 ] 00:13:56.786 } 00:13:56.786 ] 00:13:56.786 } 00:13:56.786 [2024-11-27 11:09:25.481885] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:56.786 [2024-11-27 11:09:25.482011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81038 ] 00:13:56.786 [2024-11-27 11:09:25.631378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.045 [2024-11-27 11:09:25.674678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.045 Running I/O for 5 seconds... 00:13:58.914 208256.00 IOPS, 813.50 MiB/s [2024-11-27T11:09:29.172Z] 208576.00 IOPS, 814.75 MiB/s [2024-11-27T11:09:30.115Z] 208576.00 IOPS, 814.75 MiB/s [2024-11-27T11:09:31.110Z] 208544.00 IOPS, 814.62 MiB/s 00:14:02.227 Latency(us) 00:14:02.227 [2024-11-27T11:09:31.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.227 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:02.227 null0 : 5.00 208499.24 814.45 0.00 0.00 304.83 107.91 1531.27 00:14:02.227 [2024-11-27T11:09:31.110Z] =================================================================================================================== 00:14:02.227 [2024-11-27T11:09:31.110Z] Total : 208499.24 814.45 0.00 0.00 304.83 107.91 1531.27 00:14:02.227 11:09:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:02.227 11:09:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:02.227 11:09:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:02.227 11:09:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:02.227 11:09:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:02.227 11:09:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:02.227 { 00:14:02.227 "subsystems": [ 00:14:02.227 { 00:14:02.227 "subsystem": "bdev", 00:14:02.227 "config": [ 00:14:02.227 { 00:14:02.227 "params": { 00:14:02.227 "io_mechanism": "io_uring", 00:14:02.227 "filename": "/dev/nullb0", 00:14:02.227 "name": "null0" 00:14:02.227 }, 00:14:02.227 "method": "bdev_xnvme_create" 00:14:02.227 }, 00:14:02.227 { 00:14:02.227 "method": 
"bdev_wait_for_examine" 00:14:02.227 } 00:14:02.227 ] 00:14:02.227 } 00:14:02.227 ] 00:14:02.227 } 00:14:02.227 [2024-11-27 11:09:30.977810] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:02.227 [2024-11-27 11:09:30.977934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81101 ] 00:14:02.486 [2024-11-27 11:09:31.125832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.486 [2024-11-27 11:09:31.168033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.486 Running I/O for 5 seconds... 00:14:04.384 238464.00 IOPS, 931.50 MiB/s [2024-11-27T11:09:34.642Z] 238400.00 IOPS, 931.25 MiB/s [2024-11-27T11:09:35.577Z] 237738.67 IOPS, 928.67 MiB/s [2024-11-27T11:09:36.512Z] 237984.00 IOPS, 929.62 MiB/s [2024-11-27T11:09:36.512Z] 238144.00 IOPS, 930.25 MiB/s 00:14:07.629 Latency(us) 00:14:07.629 [2024-11-27T11:09:36.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.629 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:07.629 null0 : 5.00 238076.85 929.99 0.00 0.00 266.57 245.76 1480.86 00:14:07.629 [2024-11-27T11:09:36.512Z] =================================================================================================================== 00:14:07.629 [2024-11-27T11:09:36.512Z] Total : 238076.85 929.99 0.00 0.00 266.57 245.76 1480.86 00:14:07.629 11:09:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:07.629 11:09:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:07.629 00:14:07.629 real 0m11.024s 00:14:07.629 user 0m8.693s 00:14:07.629 sys 0m2.098s 00:14:07.629 ************************************ 00:14:07.629 END TEST xnvme_bdevperf 00:14:07.629 11:09:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.629 11:09:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.629 ************************************ 00:14:07.629 00:14:07.629 real 0m28.406s 00:14:07.629 user 0m22.984s 00:14:07.629 sys 0m4.658s 00:14:07.629 11:09:36 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.629 11:09:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.629 ************************************ 00:14:07.629 END TEST nvme_xnvme 00:14:07.629 ************************************ 00:14:07.629 11:09:36 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:07.629 11:09:36 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:07.629 11:09:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:07.629 11:09:36 -- common/autotest_common.sh@10 -- # set +x 00:14:07.629 ************************************ 00:14:07.629 START TEST blockdev_xnvme 00:14:07.629 ************************************ 00:14:07.629 11:09:36 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:07.890 * Looking for test storage... 
00:14:07.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:07.890 11:09:36 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:07.890 11:09:36 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:14:07.890 11:09:36 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:07.890 11:09:36 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.890 11:09:36 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:14:07.890 11:09:36 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.890 11:09:36 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:07.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.890 --rc genhtml_branch_coverage=1 00:14:07.890 --rc genhtml_function_coverage=1 00:14:07.890 --rc genhtml_legend=1 00:14:07.890 --rc geninfo_all_blocks=1 00:14:07.890 --rc geninfo_unexecuted_blocks=1 00:14:07.890 00:14:07.890 ' 00:14:07.890 11:09:36 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:07.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.890 --rc genhtml_branch_coverage=1 00:14:07.890 --rc genhtml_function_coverage=1 00:14:07.890 --rc genhtml_legend=1 
00:14:07.890 --rc geninfo_all_blocks=1 00:14:07.890 --rc geninfo_unexecuted_blocks=1 00:14:07.890 00:14:07.890 ' 00:14:07.890 11:09:36 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:07.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.890 --rc genhtml_branch_coverage=1 00:14:07.890 --rc genhtml_function_coverage=1 00:14:07.890 --rc genhtml_legend=1 00:14:07.890 --rc geninfo_all_blocks=1 00:14:07.890 --rc geninfo_unexecuted_blocks=1 00:14:07.890 00:14:07.890 ' 00:14:07.890 11:09:36 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:07.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.890 --rc genhtml_branch_coverage=1 00:14:07.890 --rc genhtml_function_coverage=1 00:14:07.890 --rc genhtml_legend=1 00:14:07.890 --rc geninfo_all_blocks=1 00:14:07.890 --rc geninfo_unexecuted_blocks=1 00:14:07.890 00:14:07.890 ' 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81243 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:07.890 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 81243 00:14:07.890 11:09:36 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 81243 ']' 00:14:07.891 11:09:36 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:14:07.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.891 11:09:36 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.891 11:09:36 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.891 11:09:36 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.891 11:09:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.891 11:09:36 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:07.891 [2024-11-27 11:09:36.737200] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:07.891 [2024-11-27 11:09:36.737360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81243 ] 00:14:08.150 [2024-11-27 11:09:36.885670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.150 [2024-11-27 11:09:36.928072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.716 11:09:37 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:08.716 11:09:37 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:14:08.716 11:09:37 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:08.716 11:09:37 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:14:08.716 11:09:37 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:08.716 11:09:37 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:08.716 11:09:37 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:08.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:09.233 Waiting for block devices as requested 00:14:09.233 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:09.233 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:09.491 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:09.491 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:14.762 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1659 -- # 
is_block_zoned nvme1n1 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:14.762 11:09:43 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:14.762 nvme0n1 00:14:14.762 nvme1n1 00:14:14.762 nvme2n1 00:14:14.762 nvme2n2 00:14:14.762 nvme2n3 00:14:14.762 nvme3n1 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.762 11:09:43 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.762 11:09:43 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.762 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:14.763 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ba82dcf9-9d1b-4998-8916-3f73c386d0c7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ba82dcf9-9d1b-4998-8916-3f73c386d0c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "11cf37d4-c1d0-49e7-a5d9-327cedb13df7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "11cf37d4-c1d0-49e7-a5d9-327cedb13df7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "03bfcff2-0e1f-450b-9a08-9c384d8184ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "03bfcff2-0e1f-450b-9a08-9c384d8184ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' 
"nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "6a0696fb-d532-4f5b-a025-4244d7d22f02"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6a0696fb-d532-4f5b-a025-4244d7d22f02",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ed88d0e3-f173-49ef-8677-90cc778cc160"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed88d0e3-f173-49ef-8677-90cc778cc160",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "06181051-e9cb-4b3e-8bd3-22d88ebbdb55"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "06181051-e9cb-4b3e-8bd3-22d88ebbdb55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:14.763 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:14.763 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:14.763 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:14:14.763 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:14.763 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 81243 
00:14:14.763 11:09:43 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 81243 ']' 00:14:14.763 11:09:43 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 81243 00:14:14.763 11:09:43 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:14:14.763 11:09:43 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.763 11:09:43 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81243 00:14:14.763 11:09:43 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:14.763 11:09:43 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:14.763 killing process with pid 81243 00:14:14.763 11:09:43 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81243' 00:14:14.763 11:09:43 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 81243 00:14:14.763 11:09:43 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 81243 00:14:15.023 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:15.023 11:09:43 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:15.023 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:15.023 11:09:43 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.023 11:09:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:15.023 ************************************ 00:14:15.023 START TEST bdev_hello_world 00:14:15.023 ************************************ 00:14:15.023 11:09:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:15.023 [2024-11-27 11:09:43.802355] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:15.023 [2024-11-27 11:09:43.802479] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81585 ] 00:14:15.285 [2024-11-27 11:09:43.951736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.285 [2024-11-27 11:09:43.995105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.285 [2024-11-27 11:09:44.159190] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:15.285 [2024-11-27 11:09:44.159235] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:15.285 [2024-11-27 11:09:44.159253] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:15.285 [2024-11-27 11:09:44.161210] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:15.285 [2024-11-27 11:09:44.161781] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:15.285 [2024-11-27 11:09:44.161811] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:15.285 [2024-11-27 11:09:44.162859] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
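The hello_world test drives SPDK's hello_bdev example against the first xNVMe bdev: the app opens nvme0n1, writes a buffer, reads it back, and the "Read string from bdev : Hello World!" notice above is the success condition before it stops itself. The invocation recorded in the trace can be repeated by hand with the same JSON config (paths shown relative to the repo as laid out on this CI VM):

  build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1   # write, then read back, the hello string on nvme0n1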
00:14:15.285 00:14:15.285 [2024-11-27 11:09:44.162908] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:15.547 00:14:15.547 real 0m0.581s 00:14:15.547 user 0m0.301s 00:14:15.547 sys 0m0.171s 00:14:15.547 11:09:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.547 11:09:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:15.547 ************************************ 00:14:15.547 END TEST bdev_hello_world 00:14:15.547 ************************************ 00:14:15.547 11:09:44 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:15.547 11:09:44 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:15.547 11:09:44 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.547 11:09:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:15.547 ************************************ 00:14:15.547 START TEST bdev_bounds 00:14:15.547 ************************************ 00:14:15.547 11:09:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:14:15.547 11:09:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81616 00:14:15.547 11:09:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:15.547 Process bdevio pid: 81616 00:14:15.547 11:09:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81616' 00:14:15.547 11:09:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81616 00:14:15.547 11:09:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81616 ']' 00:14:15.547 11:09:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.547 11:09:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.547 11:09:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.547 11:09:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.547 11:09:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:15.547 11:09:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:15.809 [2024-11-27 11:09:44.449315] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:15.809 [2024-11-27 11:09:44.449465] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81616 ] 00:14:15.809 [2024-11-27 11:09:44.603107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:15.809 [2024-11-27 11:09:44.655689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.809 [2024-11-27 11:09:44.656068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.809 [2024-11-27 11:09:44.656162] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.753 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.753 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:14:16.753 11:09:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:16.753 I/O targets: 00:14:16.753 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:16.753 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:16.753 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:16.753 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:16.753 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:16.753 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:16.753 00:14:16.753 00:14:16.753 CUnit - A unit testing framework for C - Version 2.1-3 00:14:16.753 http://cunit.sourceforge.net/ 00:14:16.753 00:14:16.753 00:14:16.753 Suite: bdevio tests on: nvme3n1 00:14:16.753 Test: blockdev write read block ...passed 00:14:16.753 Test: blockdev write zeroes read block ...passed 00:14:16.753 Test: blockdev write zeroes read no split ...passed 00:14:16.753 Test: blockdev write zeroes read split ...passed 00:14:16.753 Test: blockdev write zeroes read split partial ...passed 00:14:16.753 Test: blockdev reset ...passed 00:14:16.753 Test: blockdev write read 8 blocks ...passed 00:14:16.753 Test: blockdev write read size > 128k ...passed 00:14:16.753 Test: blockdev write read invalid size ...passed 00:14:16.753 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:16.753 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:16.753 Test: blockdev write read max offset ...passed 00:14:16.753 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:16.753 Test: blockdev writev readv 8 blocks ...passed 00:14:16.753 Test: blockdev writev readv 30 x 1block ...passed 00:14:16.753 Test: blockdev writev readv block ...passed 00:14:16.753 Test: blockdev writev readv size > 128k ...passed 00:14:16.753 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:16.753 Test: blockdev comparev and writev ...passed 00:14:16.753 Test: blockdev nvme passthru rw ...passed 00:14:16.753 Test: blockdev nvme passthru vendor specific ...passed 00:14:16.753 Test: blockdev nvme admin passthru ...passed 00:14:16.753 Test: blockdev copy ...passed 00:14:16.753 Suite: bdevio tests on: nvme2n3 00:14:16.753 Test: blockdev write read block ...passed 00:14:16.753 Test: blockdev write zeroes read block ...passed 00:14:16.753 Test: blockdev write zeroes read no split ...passed 00:14:16.753 Test: blockdev write zeroes read split ...passed 00:14:16.753 Test: blockdev write zeroes read split partial ...passed 00:14:16.753 Test: blockdev reset ...passed 
00:14:16.753 Test: blockdev write read 8 blocks ...passed 00:14:16.753 Test: blockdev write read size > 128k ...passed 00:14:16.753 Test: blockdev write read invalid size ...passed 00:14:16.753 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:16.753 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:16.753 Test: blockdev write read max offset ...passed 00:14:16.753 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:16.753 Test: blockdev writev readv 8 blocks ...passed 00:14:16.753 Test: blockdev writev readv 30 x 1block ...passed 00:14:16.753 Test: blockdev writev readv block ...passed 00:14:16.753 Test: blockdev writev readv size > 128k ...passed 00:14:16.753 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:16.753 Test: blockdev comparev and writev ...passed 00:14:16.753 Test: blockdev nvme passthru rw ...passed 00:14:16.753 Test: blockdev nvme passthru vendor specific ...passed 00:14:16.753 Test: blockdev nvme admin passthru ...passed 00:14:16.753 Test: blockdev copy ...passed 00:14:16.753 Suite: bdevio tests on: nvme2n2 00:14:16.753 Test: blockdev write read block ...passed 00:14:16.753 Test: blockdev write zeroes read block ...passed 00:14:16.753 Test: blockdev write zeroes read no split ...passed 00:14:16.753 Test: blockdev write zeroes read split ...passed 00:14:16.753 Test: blockdev write zeroes read split partial ...passed 00:14:16.753 Test: blockdev reset ...passed 00:14:16.753 Test: blockdev write read 8 blocks ...passed 00:14:16.753 Test: blockdev write read size > 128k ...passed 00:14:16.753 Test: blockdev write read invalid size ...passed 00:14:16.753 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:16.753 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:16.753 Test: blockdev write read max offset ...passed 00:14:16.753 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:16.753 Test: blockdev writev readv 8 blocks ...passed 00:14:16.753 Test: blockdev writev readv 30 x 1block ...passed 00:14:16.753 Test: blockdev writev readv block ...passed 00:14:16.753 Test: blockdev writev readv size > 128k ...passed 00:14:16.753 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:16.753 Test: blockdev comparev and writev ...passed 00:14:16.753 Test: blockdev nvme passthru rw ...passed 00:14:16.754 Test: blockdev nvme passthru vendor specific ...passed 00:14:16.754 Test: blockdev nvme admin passthru ...passed 00:14:16.754 Test: blockdev copy ...passed 00:14:16.754 Suite: bdevio tests on: nvme2n1 00:14:16.754 Test: blockdev write read block ...passed 00:14:16.754 Test: blockdev write zeroes read block ...passed 00:14:16.754 Test: blockdev write zeroes read no split ...passed 00:14:16.754 Test: blockdev write zeroes read split ...passed 00:14:16.754 Test: blockdev write zeroes read split partial ...passed 00:14:16.754 Test: blockdev reset ...passed 00:14:16.754 Test: blockdev write read 8 blocks ...passed 00:14:16.754 Test: blockdev write read size > 128k ...passed 00:14:16.754 Test: blockdev write read invalid size ...passed 00:14:16.754 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:16.754 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:16.754 Test: blockdev write read max offset ...passed 00:14:16.754 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:16.754 Test: blockdev writev readv 8 blocks 
...passed 00:14:16.754 Test: blockdev writev readv 30 x 1block ...passed 00:14:16.754 Test: blockdev writev readv block ...passed 00:14:16.754 Test: blockdev writev readv size > 128k ...passed 00:14:16.754 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:16.754 Test: blockdev comparev and writev ...passed 00:14:16.754 Test: blockdev nvme passthru rw ...passed 00:14:16.754 Test: blockdev nvme passthru vendor specific ...passed 00:14:16.754 Test: blockdev nvme admin passthru ...passed 00:14:16.754 Test: blockdev copy ...passed 00:14:16.754 Suite: bdevio tests on: nvme1n1 00:14:16.754 Test: blockdev write read block ...passed 00:14:16.754 Test: blockdev write zeroes read block ...passed 00:14:16.754 Test: blockdev write zeroes read no split ...passed 00:14:16.754 Test: blockdev write zeroes read split ...passed 00:14:16.754 Test: blockdev write zeroes read split partial ...passed 00:14:16.754 Test: blockdev reset ...passed 00:14:16.754 Test: blockdev write read 8 blocks ...passed 00:14:16.754 Test: blockdev write read size > 128k ...passed 00:14:16.754 Test: blockdev write read invalid size ...passed 00:14:16.754 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:16.754 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:16.754 Test: blockdev write read max offset ...passed 00:14:16.754 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:16.754 Test: blockdev writev readv 8 blocks ...passed 00:14:17.015 Test: blockdev writev readv 30 x 1block ...passed 00:14:17.015 Test: blockdev writev readv block ...passed 00:14:17.015 Test: blockdev writev readv size > 128k ...passed 00:14:17.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:17.015 Test: blockdev comparev and writev ...passed 00:14:17.015 Test: blockdev nvme passthru rw ...passed 00:14:17.015 Test: blockdev nvme passthru vendor specific ...passed 00:14:17.015 Test: blockdev nvme admin passthru ...passed 00:14:17.015 Test: blockdev copy ...passed 00:14:17.015 Suite: bdevio tests on: nvme0n1 00:14:17.015 Test: blockdev write read block ...passed 00:14:17.015 Test: blockdev write zeroes read block ...passed 00:14:17.015 Test: blockdev write zeroes read no split ...passed 00:14:17.015 Test: blockdev write zeroes read split ...passed 00:14:17.015 Test: blockdev write zeroes read split partial ...passed 00:14:17.015 Test: blockdev reset ...passed 00:14:17.015 Test: blockdev write read 8 blocks ...passed 00:14:17.015 Test: blockdev write read size > 128k ...passed 00:14:17.015 Test: blockdev write read invalid size ...passed 00:14:17.015 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:17.015 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:17.015 Test: blockdev write read max offset ...passed 00:14:17.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:17.015 Test: blockdev writev readv 8 blocks ...passed 00:14:17.015 Test: blockdev writev readv 30 x 1block ...passed 00:14:17.015 Test: blockdev writev readv block ...passed 00:14:17.015 Test: blockdev writev readv size > 128k ...passed 00:14:17.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:17.015 Test: blockdev comparev and writev ...passed 00:14:17.015 Test: blockdev nvme passthru rw ...passed 00:14:17.015 Test: blockdev nvme passthru vendor specific ...passed 00:14:17.015 Test: blockdev nvme admin passthru ...passed 00:14:17.015 Test: blockdev copy ...passed 
00:14:17.015 00:14:17.015 Run Summary: Type Total Ran Passed Failed Inactive 00:14:17.015 suites 6 6 n/a 0 0 00:14:17.015 tests 138 138 138 0 0 00:14:17.015 asserts 780 780 780 0 n/a 00:14:17.015 00:14:17.015 Elapsed time = 0.597 seconds 00:14:17.015 0 00:14:17.015 11:09:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81616 00:14:17.015 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81616 ']' 00:14:17.015 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81616 00:14:17.015 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:14:17.015 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.015 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81616 00:14:17.015 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:17.015 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:17.015 killing process with pid 81616 00:14:17.015 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81616' 00:14:17.015 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 81616 00:14:17.015 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81616 00:14:17.276 11:09:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:17.276 00:14:17.276 real 0m1.594s 00:14:17.276 user 0m3.696s 00:14:17.276 sys 0m0.389s 00:14:17.276 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.276 11:09:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:17.276 ************************************ 00:14:17.276 END TEST bdev_bounds 00:14:17.276 ************************************ 00:14:17.277 11:09:46 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:17.277 11:09:46 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:17.277 11:09:46 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.277 11:09:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.277 ************************************ 00:14:17.277 START TEST bdev_nbd 00:14:17.277 ************************************ 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
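From this point the bdev_nbd test exports each xNVMe bdev through the kernel NBD driver (hence the /sys/module/nbd check in the trace that follows): nbd_start_disk maps a bdev to a /dev/nbdN node, a single-block direct-I/O dd read confirms the mapping, nbd_get_disks reports the bdev-to-device table, and nbd_stop_disk detaches it again. A hand-run equivalent, assuming the nbd module is loaded, that the attached node comes back as /dev/nbd0, and using the same /var/tmp/spdk-nbd.sock socket the test creates, looks like:

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1     # prints the /dev/nbdN node it attached
  dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct           # sanity-check a one-block direct read
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks              # list nbd_device / bdev_name pairs
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0    # detach the export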
00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81660 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81660 /var/tmp/spdk-nbd.sock 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81660 ']' 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:17.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:17.277 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:17.277 [2024-11-27 11:09:46.103241] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:17.277 [2024-11-27 11:09:46.103362] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.537 [2024-11-27 11:09:46.250965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.537 [2024-11-27 11:09:46.303223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:18.110 11:09:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.372 
1+0 records in 00:14:18.372 1+0 records out 00:14:18.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103743 s, 3.9 MB/s 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:18.372 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.634 1+0 records in 00:14:18.634 1+0 records out 00:14:18.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518056 s, 7.9 MB/s 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:18.634 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:18.895 11:09:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.895 1+0 records in 00:14:18.895 1+0 records out 00:14:18.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102041 s, 4.0 MB/s 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:18.895 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.155 1+0 records in 00:14:19.155 1+0 records out 00:14:19.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708392 s, 5.8 MB/s 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:19.155 11:09:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.416 1+0 records in 00:14:19.416 1+0 records out 00:14:19.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411387 s, 10.0 MB/s 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:19.416 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:14:19.678 11:09:48 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.678 1+0 records in 00:14:19.678 1+0 records out 00:14:19.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126786 s, 3.2 MB/s 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:19.678 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:19.940 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:19.940 { 00:14:19.940 "nbd_device": "/dev/nbd0", 00:14:19.940 "bdev_name": "nvme0n1" 00:14:19.940 }, 00:14:19.940 { 00:14:19.940 "nbd_device": "/dev/nbd1", 00:14:19.940 "bdev_name": "nvme1n1" 00:14:19.940 }, 00:14:19.940 { 00:14:19.940 "nbd_device": "/dev/nbd2", 00:14:19.940 "bdev_name": "nvme2n1" 00:14:19.940 }, 00:14:19.940 { 00:14:19.940 "nbd_device": "/dev/nbd3", 00:14:19.940 "bdev_name": "nvme2n2" 00:14:19.940 }, 00:14:19.940 { 00:14:19.940 "nbd_device": "/dev/nbd4", 00:14:19.940 "bdev_name": "nvme2n3" 00:14:19.940 }, 00:14:19.940 { 00:14:19.940 "nbd_device": "/dev/nbd5", 00:14:19.940 "bdev_name": "nvme3n1" 00:14:19.940 } 00:14:19.940 ]' 00:14:19.940 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:19.940 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:19.940 { 00:14:19.940 "nbd_device": "/dev/nbd0", 00:14:19.940 "bdev_name": "nvme0n1" 00:14:19.940 }, 00:14:19.940 { 00:14:19.940 "nbd_device": "/dev/nbd1", 00:14:19.940 "bdev_name": "nvme1n1" 00:14:19.940 }, 00:14:19.940 { 00:14:19.940 "nbd_device": "/dev/nbd2", 00:14:19.940 "bdev_name": "nvme2n1" 00:14:19.940 }, 00:14:19.940 { 00:14:19.940 "nbd_device": "/dev/nbd3", 00:14:19.940 "bdev_name": "nvme2n2" 00:14:19.940 }, 00:14:19.940 { 00:14:19.940 "nbd_device": "/dev/nbd4", 00:14:19.940 "bdev_name": "nvme2n3" 00:14:19.940 }, 00:14:19.940 { 00:14:19.940 "nbd_device": "/dev/nbd5", 00:14:19.940 "bdev_name": "nvme3n1" 00:14:19.940 } 00:14:19.940 ]' 00:14:19.940 11:09:48 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:19.940 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:19.940 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:19.940 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:19.940 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:19.940 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:19.940 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:19.940 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:20.202 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:20.202 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:20.202 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:20.202 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.202 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.202 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:20.202 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:20.202 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.202 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.202 11:09:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:20.463 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:20.463 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:20.463 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:20.463 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.463 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.463 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:20.463 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:20.463 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.463 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.463 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:20.724 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:20.724 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:20.724 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:20.724 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.724 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.724 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:20.724 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:20.724 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.724 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.724 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:20.985 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:20.985 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:20.985 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:20.985 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.985 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.985 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:20.985 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:20.985 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.985 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.985 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:21.271 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:21.271 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:21.271 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:21.271 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.271 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.271 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:21.271 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:21.271 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.271 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.271 11:09:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:21.271 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:21.271 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:21.271 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:21.271 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.271 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.271 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:21.271 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:21.271 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.271 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:21.271 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:21.271 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:21.533 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:21.795 /dev/nbd0 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.796 1+0 records in 00:14:21.796 1+0 records out 00:14:21.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0017571 s, 2.3 MB/s 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:21.796 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:22.057 /dev/nbd1 00:14:22.057 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:22.057 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:22.057 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:22.057 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:22.057 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:22.057 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:22.057 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:22.057 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:22.057 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:22.057 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:22.057 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.057 1+0 records in 00:14:22.057 1+0 records out 00:14:22.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133169 s, 3.1 MB/s 00:14:22.057 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.058 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:22.058 11:09:50 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.058 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:22.058 11:09:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:22.058 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.058 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:22.058 11:09:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:22.318 /dev/nbd10 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.318 1+0 records in 00:14:22.318 1+0 records out 00:14:22.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104799 s, 3.9 MB/s 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:22.318 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:22.319 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.319 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:22.319 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:22.580 /dev/nbd11 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:22.580 11:09:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.580 1+0 records in 00:14:22.580 1+0 records out 00:14:22.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000975144 s, 4.2 MB/s 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:22.580 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:22.841 /dev/nbd12 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.841 1+0 records in 00:14:22.841 1+0 records out 00:14:22.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00161909 s, 2.5 MB/s 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:22.841 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:23.104 /dev/nbd13 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.104 1+0 records in 00:14:23.104 1+0 records out 00:14:23.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104818 s, 3.9 MB/s 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:23.104 11:09:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:23.366 { 00:14:23.366 "nbd_device": "/dev/nbd0", 00:14:23.366 "bdev_name": "nvme0n1" 00:14:23.366 }, 00:14:23.366 { 00:14:23.366 "nbd_device": "/dev/nbd1", 00:14:23.366 "bdev_name": "nvme1n1" 00:14:23.366 }, 00:14:23.366 { 00:14:23.366 "nbd_device": "/dev/nbd10", 00:14:23.366 "bdev_name": "nvme2n1" 00:14:23.366 }, 00:14:23.366 { 00:14:23.366 "nbd_device": "/dev/nbd11", 00:14:23.366 "bdev_name": "nvme2n2" 00:14:23.366 }, 00:14:23.366 { 00:14:23.366 "nbd_device": "/dev/nbd12", 00:14:23.366 "bdev_name": "nvme2n3" 00:14:23.366 }, 00:14:23.366 { 00:14:23.366 "nbd_device": "/dev/nbd13", 00:14:23.366 "bdev_name": "nvme3n1" 00:14:23.366 } 00:14:23.366 ]' 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:14:23.366 { 00:14:23.366 "nbd_device": "/dev/nbd0", 00:14:23.366 "bdev_name": "nvme0n1" 00:14:23.366 }, 00:14:23.366 { 00:14:23.366 "nbd_device": "/dev/nbd1", 00:14:23.366 "bdev_name": "nvme1n1" 00:14:23.366 }, 00:14:23.366 { 00:14:23.366 "nbd_device": "/dev/nbd10", 00:14:23.366 "bdev_name": "nvme2n1" 00:14:23.366 }, 00:14:23.366 { 00:14:23.366 "nbd_device": "/dev/nbd11", 00:14:23.366 "bdev_name": "nvme2n2" 00:14:23.366 }, 00:14:23.366 { 00:14:23.366 "nbd_device": "/dev/nbd12", 00:14:23.366 "bdev_name": "nvme2n3" 00:14:23.366 }, 00:14:23.366 { 00:14:23.366 "nbd_device": "/dev/nbd13", 00:14:23.366 "bdev_name": "nvme3n1" 00:14:23.366 } 00:14:23.366 ]' 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:23.366 /dev/nbd1 00:14:23.366 /dev/nbd10 00:14:23.366 /dev/nbd11 00:14:23.366 /dev/nbd12 00:14:23.366 /dev/nbd13' 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:23.366 /dev/nbd1 00:14:23.366 /dev/nbd10 00:14:23.366 /dev/nbd11 00:14:23.366 /dev/nbd12 00:14:23.366 /dev/nbd13' 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:23.366 256+0 records in 00:14:23.366 256+0 records out 00:14:23.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103577 s, 101 MB/s 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:23.366 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:23.628 256+0 records in 00:14:23.628 256+0 records out 00:14:23.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242246 s, 4.3 MB/s 00:14:23.628 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:23.628 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:23.891 256+0 records in 00:14:23.891 256+0 records out 00:14:23.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.317209 s, 
3.3 MB/s 00:14:23.891 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:23.891 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:24.152 256+0 records in 00:14:24.152 256+0 records out 00:14:24.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.200396 s, 5.2 MB/s 00:14:24.152 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:24.152 11:09:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:24.414 256+0 records in 00:14:24.414 256+0 records out 00:14:24.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.231861 s, 4.5 MB/s 00:14:24.414 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:24.414 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:24.676 256+0 records in 00:14:24.676 256+0 records out 00:14:24.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238381 s, 4.4 MB/s 00:14:24.676 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:24.676 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:24.938 256+0 records in 00:14:24.938 256+0 records out 00:14:24.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251907 s, 4.2 MB/s 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:24.938 
11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.938 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:25.200 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:25.200 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:25.200 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:25.200 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.200 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.200 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:25.200 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:25.200 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.200 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.201 11:09:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:25.462 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:25.462 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:25.462 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:25.462 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.462 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.462 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:25.462 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:25.462 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.462 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.462 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:14:25.724 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:25.724 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:25.724 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:25.724 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.724 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.724 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:25.724 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:25.724 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.724 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.724 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:25.985 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:25.985 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:25.985 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:25.985 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.985 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.985 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:25.985 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:25.985 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.985 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.985 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:25.985 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:26.247 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:26.247 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:26.247 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.247 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.247 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:26.247 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:26.247 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.248 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.248 11:09:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:26.248 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:26.248 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:26.248 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:26.248 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.248 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.248 
11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:26.248 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:26.248 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.248 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:26.248 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:26.248 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:26.509 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:26.770 malloc_lvol_verify 00:14:26.770 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:27.033 1a4a6ee2-765f-4421-9ecf-c018e74fe3ca 00:14:27.033 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:27.294 76f1087f-26be-4f85-b496-0e543b055f36 00:14:27.294 11:09:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:27.294 /dev/nbd0 00:14:27.294 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:27.294 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:27.294 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:27.294 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:27.294 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:14:27.294 mke2fs 1.47.0 (5-Feb-2023) 00:14:27.294 Discarding device blocks: 0/4096 
done 00:14:27.294 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:27.294 00:14:27.294 Allocating group tables: 0/1 done 00:14:27.294 Writing inode tables: 0/1 done 00:14:27.294 Creating journal (1024 blocks): done 00:14:27.294 Writing superblocks and filesystem accounting information: 0/1 done 00:14:27.294 00:14:27.294 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:27.294 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:27.294 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:27.294 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:27.294 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:27.294 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:27.294 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81660 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81660 ']' 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81660 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81660 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:27.556 killing process with pid 81660 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81660' 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81660 00:14:27.556 11:09:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81660 00:14:27.818 11:09:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:27.818 00:14:27.818 real 0m10.556s 00:14:27.818 user 0m14.243s 00:14:27.818 sys 0m3.917s 00:14:27.818 11:09:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:27.818 ************************************ 00:14:27.818 END TEST bdev_nbd 00:14:27.818 ************************************ 00:14:27.818 11:09:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
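The bdev_nbd test traced above drives SPDK's NBD export path end to end: each bdev is attached to a /dev/nbdN node through the /var/tmp/spdk-nbd.sock RPC socket, the node is polled for in /proc/partitions, one 4 KiB block is read back with dd (O_DIRECT) to confirm the device answers I/O, and the node is detached again before the app is killed. Below is a minimal standalone sketch of that attach/verify/detach cycle, not the test's own implementation; it assumes an SPDK application with a bdev named nvme0n1 is already serving RPCs on the same socket, that the nbd kernel module is loaded and the shell has root privileges, and that the scratch-file path /tmp/nbdtest is purely illustrative.

#!/usr/bin/env bash
# Sketch: attach one SPDK bdev as an NBD device, verify it answers I/O, detach it.
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path as used in the log
SOCK=/var/tmp/spdk-nbd.sock                       # RPC socket of the running SPDK app
BDEV=nvme0n1                                      # assumed bdev name (matches the log)
SCRATCH=/tmp/nbdtest                              # illustrative scratch file

# Attach the bdev; nbd_start_disk prints the /dev/nbdN node it picked.
nbd_dev=$("$RPC" -s "$SOCK" nbd_start_disk "$BDEV")

# Wait (up to 20 tries) for the kernel to publish the new device.
for i in $(seq 1 20); do
    if grep -q -w "$(basename "$nbd_dev")" /proc/partitions; then
        break
    fi
    sleep 0.5
done

# Read one 4 KiB block with O_DIRECT and confirm we actually got data back.
dd if="$nbd_dev" of="$SCRATCH" bs=4096 count=1 iflag=direct
[ "$(stat -c %s "$SCRATCH")" -ne 0 ]
rm -f "$SCRATCH"

# Detach the device and confirm nothing is left exported (expects '[]').
"$RPC" -s "$SOCK" nbd_stop_disk "$nbd_dev"
"$RPC" -s "$SOCK" nbd_get_disks

In the log the same sequence is generalized over six bdevs (nvme0n1, nvme1n1, nvme2n1-n3, nvme3n1), first with auto-assigned nodes and then with explicit /dev/nbd0-/dev/nbd13 names, and is repeated around a random-data write/cmp verification pass before the nodes are torn down and the NBD RPC server process is killed.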
00:14:27.818 11:09:56 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:27.818 11:09:56 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:14:27.818 11:09:56 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:14:27.818 11:09:56 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:27.818 11:09:56 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:27.818 11:09:56 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:27.818 11:09:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.818 ************************************ 00:14:27.818 START TEST bdev_fio 00:14:27.818 ************************************ 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:27.818 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:14:27.818 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:28.081 ************************************ 00:14:28.081 START TEST bdev_fio_rw_verify 00:14:28.081 ************************************ 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:28.081 11:09:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:28.081 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:28.081 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:28.081 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:28.081 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:28.081 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:28.081 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:28.081 fio-3.35 00:14:28.081 Starting 6 threads 00:14:40.449 00:14:40.450 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=82067: Wed Nov 27 11:10:07 2024 00:14:40.450 read: IOPS=10.9k, BW=42.4MiB/s (44.5MB/s)(424MiB/10003msec) 00:14:40.450 slat (usec): min=2, max=1745, avg= 6.87, stdev=13.51 00:14:40.450 clat (usec): min=104, max=8620, avg=1857.28, stdev=907.68 00:14:40.450 lat (usec): min=107, max=8629, avg=1864.15, stdev=908.31 
00:14:40.450 clat percentiles (usec): 00:14:40.450 | 50.000th=[ 1729], 99.000th=[ 4621], 99.900th=[ 6325], 99.990th=[ 7963], 00:14:40.450 | 99.999th=[ 8586] 00:14:40.450 write: IOPS=11.1k, BW=43.6MiB/s (45.7MB/s)(436MiB/10003msec); 0 zone resets 00:14:40.450 slat (usec): min=10, max=5120, avg=47.37, stdev=183.30 00:14:40.450 clat (usec): min=123, max=11235, avg=2141.15, stdev=1091.96 00:14:40.450 lat (usec): min=136, max=11277, avg=2188.52, stdev=1106.57 00:14:40.450 clat percentiles (usec): 00:14:40.450 | 50.000th=[ 1942], 99.000th=[ 5669], 99.900th=[ 7570], 99.990th=[ 9110], 00:14:40.450 | 99.999th=[10945] 00:14:40.450 bw ( KiB/s): min=33032, max=51336, per=100.00%, avg=44779.58, stdev=940.80, samples=114 00:14:40.450 iops : min= 8258, max=12834, avg=11193.95, stdev=235.16, samples=114 00:14:40.450 lat (usec) : 250=0.25%, 500=1.79%, 750=4.69%, 1000=6.89% 00:14:40.450 lat (msec) : 2=43.81%, 4=38.15%, 10=4.43%, 20=0.01% 00:14:40.450 cpu : usr=45.69%, sys=31.60%, ctx=4176, majf=0, minf=12276 00:14:40.450 IO depths : 1=11.1%, 2=23.3%, 4=51.5%, 8=14.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:40.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.450 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.450 issued rwts: total=108575,111537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.450 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:40.450 00:14:40.450 Run status group 0 (all jobs): 00:14:40.450 READ: bw=42.4MiB/s (44.5MB/s), 42.4MiB/s-42.4MiB/s (44.5MB/s-44.5MB/s), io=424MiB (445MB), run=10003-10003msec 00:14:40.450 WRITE: bw=43.6MiB/s (45.7MB/s), 43.6MiB/s-43.6MiB/s (45.7MB/s-45.7MB/s), io=436MiB (457MB), run=10003-10003msec 00:14:40.450 ----------------------------------------------------- 00:14:40.450 Suppressions used: 00:14:40.450 count bytes template 00:14:40.450 6 48 /usr/src/fio/parse.c 00:14:40.450 2887 277152 /usr/src/fio/iolog.c 00:14:40.450 1 8 libtcmalloc_minimal.so 00:14:40.450 1 904 libcrypto.so 00:14:40.450 ----------------------------------------------------- 00:14:40.450 00:14:40.450 00:14:40.450 real 0m11.188s 00:14:40.450 user 0m28.203s 00:14:40.450 sys 0m19.280s 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.450 ************************************ 00:14:40.450 END TEST bdev_fio_rw_verify 00:14:40.450 ************************************ 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:40.450 11:10:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ba82dcf9-9d1b-4998-8916-3f73c386d0c7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ba82dcf9-9d1b-4998-8916-3f73c386d0c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "11cf37d4-c1d0-49e7-a5d9-327cedb13df7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "11cf37d4-c1d0-49e7-a5d9-327cedb13df7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "03bfcff2-0e1f-450b-9a08-9c384d8184ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "03bfcff2-0e1f-450b-9a08-9c384d8184ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "6a0696fb-d532-4f5b-a025-4244d7d22f02"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6a0696fb-d532-4f5b-a025-4244d7d22f02",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ed88d0e3-f173-49ef-8677-90cc778cc160"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed88d0e3-f173-49ef-8677-90cc778cc160",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "06181051-e9cb-4b3e-8bd3-22d88ebbdb55"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "06181051-e9cb-4b3e-8bd3-22d88ebbdb55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:40.450 11:10:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:14:40.450 11:10:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:40.450 /home/vagrant/spdk_repo/spdk 00:14:40.450 11:10:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:14:40.450 11:10:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:14:40.450 11:10:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:14:40.450 00:14:40.450 real 0m11.367s 00:14:40.450 user 
0m28.281s 00:14:40.450 sys 0m19.358s 00:14:40.450 ************************************ 00:14:40.450 END TEST bdev_fio 00:14:40.450 ************************************ 00:14:40.450 11:10:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.450 11:10:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:40.450 11:10:08 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:40.450 11:10:08 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:40.450 11:10:08 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:40.450 11:10:08 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:40.450 11:10:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:40.450 ************************************ 00:14:40.450 START TEST bdev_verify 00:14:40.450 ************************************ 00:14:40.451 11:10:08 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:40.451 [2024-11-27 11:10:08.154836] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:40.451 [2024-11-27 11:10:08.154991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82239 ] 00:14:40.451 [2024-11-27 11:10:08.307128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:40.451 [2024-11-27 11:10:08.360515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.451 [2024-11-27 11:10:08.360583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.451 Running I/O for 5 seconds... 
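For context on the numbers that follow: the bdev_verify stage is the stock bdevperf example pointed at the bdev.json generated earlier in this run, and the later bdev_verify_big_io and bdev_write_zeroes stages reuse the same binary with different -o/-w/-t (and core-mask) settings. A roughly equivalent manual invocation, with the paths exactly as they appear in this log, would be:

# 5-second 4 KiB verify workload, queue depth 128, reactors on cores 0-1 (mask 0x3)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The per-device IOPS and latency table below is bdevperf's own summary for this run.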
00:14:41.967 21824.00 IOPS, 85.25 MiB/s [2024-11-27T11:10:11.794Z] 22336.00 IOPS, 87.25 MiB/s [2024-11-27T11:10:12.739Z] 22624.00 IOPS, 88.37 MiB/s [2024-11-27T11:10:13.682Z] 22464.00 IOPS, 87.75 MiB/s [2024-11-27T11:10:13.942Z] 23148.80 IOPS, 90.42 MiB/s 00:14:45.059 Latency(us) 00:14:45.059 [2024-11-27T11:10:13.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.059 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:45.059 Verification LBA range: start 0x0 length 0xa0000 00:14:45.059 nvme0n1 : 5.02 1785.27 6.97 0.00 0.00 71553.95 9175.04 76223.41 00:14:45.059 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:45.059 Verification LBA range: start 0xa0000 length 0xa0000 00:14:45.059 nvme0n1 : 5.05 1823.50 7.12 0.00 0.00 70058.63 9628.75 74610.22 00:14:45.060 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:45.060 Verification LBA range: start 0x0 length 0xbd0bd 00:14:45.060 nvme1n1 : 5.03 2321.95 9.07 0.00 0.00 54879.34 4486.70 74610.22 00:14:45.060 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:45.060 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:45.060 nvme1n1 : 5.05 2383.95 9.31 0.00 0.00 53391.24 5242.88 64527.75 00:14:45.060 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:45.060 Verification LBA range: start 0x0 length 0x80000 00:14:45.060 nvme2n1 : 5.05 1851.34 7.23 0.00 0.00 68518.60 7108.14 66544.25 00:14:45.060 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:45.060 Verification LBA range: start 0x80000 length 0x80000 00:14:45.060 nvme2n1 : 5.04 1878.94 7.34 0.00 0.00 67765.11 8166.79 73400.32 00:14:45.060 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:45.060 Verification LBA range: start 0x0 length 0x80000 00:14:45.060 nvme2n2 : 5.07 1793.50 7.01 0.00 0.00 70546.26 8570.09 72593.72 00:14:45.060 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:45.060 Verification LBA range: start 0x80000 length 0x80000 00:14:45.060 nvme2n2 : 5.05 1824.39 7.13 0.00 0.00 69428.20 9779.99 66140.95 00:14:45.060 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:45.060 Verification LBA range: start 0x0 length 0x80000 00:14:45.060 nvme2n3 : 5.07 1791.16 7.00 0.00 0.00 70496.94 4990.82 72190.42 00:14:45.060 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:45.060 Verification LBA range: start 0x80000 length 0x80000 00:14:45.060 nvme2n3 : 5.07 1819.31 7.11 0.00 0.00 69471.73 12048.54 64931.05 00:14:45.060 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:45.060 Verification LBA range: start 0x0 length 0x20000 00:14:45.060 nvme3n1 : 5.08 1815.86 7.09 0.00 0.00 69403.87 5394.12 65334.35 00:14:45.060 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:45.060 Verification LBA range: start 0x20000 length 0x20000 00:14:45.060 nvme3n1 : 5.07 1842.22 7.20 0.00 0.00 68476.98 1966.08 62511.26 00:14:45.060 [2024-11-27T11:10:13.943Z] =================================================================================================================== 00:14:45.060 [2024-11-27T11:10:13.943Z] Total : 22931.39 89.58 0.00 0.00 66397.96 1966.08 76223.41 00:14:45.321 00:14:45.321 real 0m5.862s 00:14:45.321 user 0m9.258s 00:14:45.321 sys 0m1.468s 00:14:45.321 11:10:13 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:14:45.321 ************************************ 00:14:45.321 END TEST bdev_verify 00:14:45.321 ************************************ 00:14:45.321 11:10:13 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:45.321 11:10:13 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:45.321 11:10:13 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:45.321 11:10:13 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:45.321 11:10:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.321 ************************************ 00:14:45.321 START TEST bdev_verify_big_io 00:14:45.321 ************************************ 00:14:45.321 11:10:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:45.321 [2024-11-27 11:10:14.087377] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:45.321 [2024-11-27 11:10:14.087521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82324 ] 00:14:45.582 [2024-11-27 11:10:14.241711] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:45.582 [2024-11-27 11:10:14.291499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.582 [2024-11-27 11:10:14.291602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.843 Running I/O for 5 seconds... 
00:14:51.960 1792.00 IOPS, 112.00 MiB/s [2024-11-27T11:10:20.843Z] 3952.50 IOPS, 247.03 MiB/s 00:14:51.960 Latency(us) 00:14:51.960 [2024-11-27T11:10:20.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.960 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:51.960 Verification LBA range: start 0x0 length 0xa000 00:14:51.960 nvme0n1 : 5.75 133.57 8.35 0.00 0.00 911076.96 130668.70 974369.08 00:14:51.960 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:51.960 Verification LBA range: start 0xa000 length 0xa000 00:14:51.960 nvme0n1 : 6.00 85.32 5.33 0.00 0.00 1422814.92 362968.62 1387346.71 00:14:51.960 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:51.960 Verification LBA range: start 0x0 length 0xbd0b 00:14:51.960 nvme1n1 : 5.75 133.53 8.35 0.00 0.00 883042.49 12250.19 851766.35 00:14:51.960 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:51.960 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:51.960 nvme1n1 : 6.00 103.95 6.50 0.00 0.00 1167913.24 30045.74 1245385.65 00:14:51.960 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:51.960 Verification LBA range: start 0x0 length 0x8000 00:14:51.960 nvme2n1 : 5.82 140.20 8.76 0.00 0.00 822768.52 49807.36 838860.80 00:14:51.960 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:51.960 Verification LBA range: start 0x8000 length 0x8000 00:14:51.961 nvme2n1 : 5.97 85.72 5.36 0.00 0.00 1357486.87 154060.01 1038896.84 00:14:51.961 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:51.961 Verification LBA range: start 0x0 length 0x8000 00:14:51.961 nvme2n2 : 5.89 152.01 9.50 0.00 0.00 746789.70 72997.02 858219.13 00:14:51.961 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:51.961 Verification LBA range: start 0x8000 length 0x8000 00:14:51.961 nvme2n2 : 6.00 77.11 4.82 0.00 0.00 1434274.80 75013.51 3510309.81 00:14:51.961 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:51.961 Verification LBA range: start 0x0 length 0x8000 00:14:51.961 nvme2n3 : 5.90 162.67 10.17 0.00 0.00 681312.26 78643.20 942105.21 00:14:51.961 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:51.961 Verification LBA range: start 0x8000 length 0x8000 00:14:51.961 nvme2n3 : 6.01 106.46 6.65 0.00 0.00 995394.15 4612.73 1806777.11 00:14:51.961 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:51.961 Verification LBA range: start 0x0 length 0x2000 00:14:51.961 nvme3n1 : 5.91 205.90 12.87 0.00 0.00 525855.02 6200.71 903388.55 00:14:51.961 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:51.961 Verification LBA range: start 0x2000 length 0x2000 00:14:51.961 nvme3n1 : 6.15 153.88 9.62 0.00 0.00 660791.91 365.49 3716798.62 00:14:51.961 [2024-11-27T11:10:20.844Z] =================================================================================================================== 00:14:51.961 [2024-11-27T11:10:20.844Z] Total : 1540.34 96.27 0.00 0.00 887008.98 365.49 3716798.62 00:14:52.222 00:14:52.222 real 0m6.903s 00:14:52.222 user 0m12.694s 00:14:52.222 sys 0m0.424s 00:14:52.222 11:10:20 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:52.222 ************************************ 00:14:52.222 END TEST bdev_verify_big_io 
00:14:52.222 ************************************ 00:14:52.222 11:10:20 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.222 11:10:20 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:52.222 11:10:20 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:52.222 11:10:20 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:52.222 11:10:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:52.222 ************************************ 00:14:52.222 START TEST bdev_write_zeroes 00:14:52.222 ************************************ 00:14:52.222 11:10:20 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:52.222 [2024-11-27 11:10:21.042124] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:52.222 [2024-11-27 11:10:21.042229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82424 ] 00:14:52.483 [2024-11-27 11:10:21.187480] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.483 [2024-11-27 11:10:21.229315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.772 Running I/O for 1 seconds... 00:14:53.717 79584.00 IOPS, 310.88 MiB/s 00:14:53.717 Latency(us) 00:14:53.717 [2024-11-27T11:10:22.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.717 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:53.717 nvme0n1 : 1.01 12868.96 50.27 0.00 0.00 9936.47 4713.55 22988.01 00:14:53.717 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:53.717 nvme1n1 : 1.02 14834.23 57.95 0.00 0.00 8613.90 4335.46 18955.03 00:14:53.717 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:53.717 nvme2n1 : 1.03 12852.27 50.20 0.00 0.00 9884.29 5091.64 22181.42 00:14:53.717 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:53.717 nvme2n2 : 1.02 12786.39 49.95 0.00 0.00 9929.06 4889.99 22080.59 00:14:53.717 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:53.717 nvme2n3 : 1.02 12771.66 49.89 0.00 0.00 9932.67 4713.55 22282.24 00:14:53.717 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:53.717 nvme3n1 : 1.02 12757.15 49.83 0.00 0.00 9938.43 4511.90 22584.71 00:14:53.717 [2024-11-27T11:10:22.600Z] =================================================================================================================== 00:14:53.717 [2024-11-27T11:10:22.600Z] Total : 78870.67 308.09 0.00 0.00 9677.04 4335.46 22988.01 00:14:53.979 00:14:53.979 real 0m1.721s 00:14:53.979 user 0m1.084s 00:14:53.979 sys 0m0.470s 00:14:53.979 11:10:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.979 ************************************ 00:14:53.979 11:10:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:53.979 END 
TEST bdev_write_zeroes 00:14:53.979 ************************************ 00:14:53.979 11:10:22 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:53.979 11:10:22 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:53.979 11:10:22 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.979 11:10:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:53.979 ************************************ 00:14:53.979 START TEST bdev_json_nonenclosed 00:14:53.979 ************************************ 00:14:53.979 11:10:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:53.979 [2024-11-27 11:10:22.838272] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:53.979 [2024-11-27 11:10:22.838418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82469 ] 00:14:54.240 [2024-11-27 11:10:22.992331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.240 [2024-11-27 11:10:23.041605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.240 [2024-11-27 11:10:23.041724] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:54.240 [2024-11-27 11:10:23.041740] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:54.240 [2024-11-27 11:10:23.041753] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:54.502 00:14:54.502 real 0m0.377s 00:14:54.502 user 0m0.147s 00:14:54.502 sys 0m0.125s 00:14:54.502 11:10:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:54.502 ************************************ 00:14:54.502 END TEST bdev_json_nonenclosed 00:14:54.502 ************************************ 00:14:54.502 11:10:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:54.502 11:10:23 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:54.502 11:10:23 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:54.502 11:10:23 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:54.502 11:10:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:54.502 ************************************ 00:14:54.502 START TEST bdev_json_nonarray 00:14:54.502 ************************************ 00:14:54.502 11:10:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:54.502 [2024-11-27 11:10:23.281135] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
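The two JSON negative tests in this stretch (bdev_json_nonenclosed above, bdev_json_nonarray starting here) feed deliberately malformed configs to the same bdevperf binary and expect the json_config.c errors quoted in the log. For comparison, a well-formed SPDK JSON config is a single object whose "subsystems" member is an array, the same shape as the dumps later in this log; sketched minimally:

{
  "subsystems": [
    { "subsystem": "bdev", "config": [ ... ] }
  ]
}

nonenclosed.json omits the enclosing braces and nonarray.json makes "subsystems" something other than an array, which is exactly what the two *ERROR* lines report before spdk_app_stop exits non-zero.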
00:14:54.502 [2024-11-27 11:10:23.281269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82489 ] 00:14:54.763 [2024-11-27 11:10:23.430683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.763 [2024-11-27 11:10:23.481190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.763 [2024-11-27 11:10:23.481308] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:54.763 [2024-11-27 11:10:23.481325] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:54.763 [2024-11-27 11:10:23.481338] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:54.763 ************************************ 00:14:54.763 END TEST bdev_json_nonarray 00:14:54.763 ************************************ 00:14:54.763 00:14:54.763 real 0m0.370s 00:14:54.763 user 0m0.155s 00:14:54.763 sys 0m0.109s 00:14:54.763 11:10:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:54.763 11:10:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:54.763 11:10:23 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:54.763 11:10:23 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:54.763 11:10:23 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:54.763 11:10:23 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:54.763 11:10:23 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:54.763 11:10:23 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:55.024 11:10:23 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:55.024 11:10:23 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:55.024 11:10:23 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:55.024 11:10:23 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:55.024 11:10:23 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:55.024 11:10:23 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:55.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:03.424 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:03.424 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:03.424 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:03.424 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:03.424 00:15:03.424 real 0m55.721s 00:15:03.424 user 1m18.148s 00:15:03.424 sys 0m42.290s 00:15:03.424 ************************************ 00:15:03.424 END TEST blockdev_xnvme 00:15:03.424 ************************************ 00:15:03.424 11:10:32 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:03.424 11:10:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.424 11:10:32 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:03.424 11:10:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:03.424 11:10:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:03.424 11:10:32 -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.424 ************************************ 00:15:03.424 START TEST ublk 00:15:03.424 ************************************ 00:15:03.424 11:10:32 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:03.686 * Looking for test storage... 00:15:03.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:03.686 11:10:32 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:03.686 11:10:32 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:15:03.687 11:10:32 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:03.687 11:10:32 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:03.687 11:10:32 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:03.687 11:10:32 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:03.687 11:10:32 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:03.687 11:10:32 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:03.687 11:10:32 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:03.687 11:10:32 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:03.687 11:10:32 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:03.687 11:10:32 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:03.687 11:10:32 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:03.687 11:10:32 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:03.687 11:10:32 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:03.687 11:10:32 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:03.687 11:10:32 ublk -- scripts/common.sh@345 -- # : 1 00:15:03.687 11:10:32 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:03.687 11:10:32 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:03.687 11:10:32 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:03.687 11:10:32 ublk -- scripts/common.sh@353 -- # local d=1 00:15:03.687 11:10:32 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:03.687 11:10:32 ublk -- scripts/common.sh@355 -- # echo 1 00:15:03.687 11:10:32 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:03.687 11:10:32 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:03.687 11:10:32 ublk -- scripts/common.sh@353 -- # local d=2 00:15:03.687 11:10:32 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:03.687 11:10:32 ublk -- scripts/common.sh@355 -- # echo 2 00:15:03.687 11:10:32 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:03.687 11:10:32 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:03.687 11:10:32 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:03.687 11:10:32 ublk -- scripts/common.sh@368 -- # return 0 00:15:03.687 11:10:32 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:03.687 11:10:32 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:03.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.687 --rc genhtml_branch_coverage=1 00:15:03.687 --rc genhtml_function_coverage=1 00:15:03.687 --rc genhtml_legend=1 00:15:03.687 --rc geninfo_all_blocks=1 00:15:03.687 --rc geninfo_unexecuted_blocks=1 00:15:03.687 00:15:03.687 ' 00:15:03.687 11:10:32 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:03.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.687 --rc genhtml_branch_coverage=1 00:15:03.687 --rc genhtml_function_coverage=1 00:15:03.687 --rc genhtml_legend=1 00:15:03.687 --rc geninfo_all_blocks=1 00:15:03.687 --rc geninfo_unexecuted_blocks=1 00:15:03.687 00:15:03.687 ' 00:15:03.687 11:10:32 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:03.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.687 --rc genhtml_branch_coverage=1 00:15:03.687 --rc genhtml_function_coverage=1 00:15:03.687 --rc genhtml_legend=1 00:15:03.687 --rc geninfo_all_blocks=1 00:15:03.687 --rc geninfo_unexecuted_blocks=1 00:15:03.687 00:15:03.687 ' 00:15:03.687 11:10:32 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:03.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.687 --rc genhtml_branch_coverage=1 00:15:03.687 --rc genhtml_function_coverage=1 00:15:03.687 --rc genhtml_legend=1 00:15:03.687 --rc geninfo_all_blocks=1 00:15:03.687 --rc geninfo_unexecuted_blocks=1 00:15:03.687 00:15:03.687 ' 00:15:03.687 11:10:32 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:03.687 11:10:32 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:03.687 11:10:32 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:03.687 11:10:32 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:03.687 11:10:32 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:03.687 11:10:32 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:03.687 11:10:32 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:03.687 11:10:32 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:03.687 11:10:32 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:03.687 11:10:32 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:03.687 11:10:32 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:03.687 11:10:32 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:03.687 11:10:32 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:03.687 11:10:32 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:03.687 11:10:32 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:03.687 11:10:32 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:03.687 11:10:32 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:03.687 11:10:32 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:03.687 11:10:32 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:03.687 11:10:32 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:03.687 11:10:32 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:03.687 11:10:32 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:03.687 11:10:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:03.687 ************************************ 00:15:03.687 START TEST test_save_ublk_config 00:15:03.687 ************************************ 00:15:03.687 11:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:15:03.687 11:10:32 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:03.687 11:10:32 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82792 00:15:03.687 11:10:32 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:03.687 11:10:32 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82792 00:15:03.687 11:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82792 ']' 00:15:03.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.687 11:10:32 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:03.687 11:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.687 11:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:03.687 11:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.687 11:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:03.687 11:10:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:03.948 [2024-11-27 11:10:32.572697] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
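test_save_ublk_config is a round-trip check: the first spdk_tgt (pid 82792, starting here) gets a ublk target and a single malloc0-backed disk created over RPC, the live configuration is captured with save_config, the process is killed, and a second spdk_tgt is then started from that captured config to confirm /dev/ublkb0 reappears. Stripped of the surrounding subsystems, the ublk fragment of the saved config dumped further down reads (reformatted for readability):

{ "subsystem": "ublk",
  "config": [
    { "method": "ublk_create_target", "params": { "cpumask": "1" } },
    { "method": "ublk_start_disk",
      "params": { "bdev_name": "malloc0", "ublk_id": 0, "num_queues": 1, "queue_depth": 128 } }
  ] }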
00:15:03.949 [2024-11-27 11:10:32.573429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82792 ] 00:15:03.949 [2024-11-27 11:10:32.722413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.949 [2024-11-27 11:10:32.787991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.892 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:04.892 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:04.892 11:10:33 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:04.892 11:10:33 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:04.892 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.892 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:04.892 [2024-11-27 11:10:33.426912] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:04.892 [2024-11-27 11:10:33.427283] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:04.892 malloc0 00:15:04.892 [2024-11-27 11:10:33.459033] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:04.892 [2024-11-27 11:10:33.459138] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:04.892 [2024-11-27 11:10:33.459148] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:04.892 [2024-11-27 11:10:33.459161] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:04.892 [2024-11-27 11:10:33.468021] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:04.892 [2024-11-27 11:10:33.468056] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:04.892 [2024-11-27 11:10:33.474920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:04.892 [2024-11-27 11:10:33.475040] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:04.892 [2024-11-27 11:10:33.491913] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:04.892 0 00:15:04.892 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.892 11:10:33 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:04.892 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.892 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:04.892 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.892 11:10:33 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:04.892 "subsystems": [ 00:15:04.892 { 00:15:04.892 "subsystem": "fsdev", 00:15:04.892 "config": [ 00:15:04.892 { 00:15:04.892 "method": "fsdev_set_opts", 00:15:04.892 "params": { 00:15:04.892 "fsdev_io_pool_size": 65535, 00:15:04.892 "fsdev_io_cache_size": 256 00:15:04.892 } 00:15:04.892 } 00:15:04.892 ] 00:15:04.892 }, 00:15:04.892 { 00:15:04.892 "subsystem": "keyring", 00:15:04.892 "config": [] 00:15:04.892 }, 00:15:04.892 { 00:15:04.892 "subsystem": "iobuf", 00:15:04.892 "config": [ 00:15:04.892 { 
00:15:04.892 "method": "iobuf_set_options", 00:15:04.892 "params": { 00:15:04.892 "small_pool_count": 8192, 00:15:04.892 "large_pool_count": 1024, 00:15:04.892 "small_bufsize": 8192, 00:15:04.892 "large_bufsize": 135168 00:15:04.892 } 00:15:04.892 } 00:15:04.892 ] 00:15:04.892 }, 00:15:04.892 { 00:15:04.892 "subsystem": "sock", 00:15:04.892 "config": [ 00:15:04.892 { 00:15:04.892 "method": "sock_set_default_impl", 00:15:04.892 "params": { 00:15:04.892 "impl_name": "posix" 00:15:04.892 } 00:15:04.892 }, 00:15:04.892 { 00:15:04.892 "method": "sock_impl_set_options", 00:15:04.892 "params": { 00:15:04.892 "impl_name": "ssl", 00:15:04.892 "recv_buf_size": 4096, 00:15:04.892 "send_buf_size": 4096, 00:15:04.892 "enable_recv_pipe": true, 00:15:04.892 "enable_quickack": false, 00:15:04.892 "enable_placement_id": 0, 00:15:04.892 "enable_zerocopy_send_server": true, 00:15:04.892 "enable_zerocopy_send_client": false, 00:15:04.892 "zerocopy_threshold": 0, 00:15:04.892 "tls_version": 0, 00:15:04.892 "enable_ktls": false 00:15:04.892 } 00:15:04.892 }, 00:15:04.892 { 00:15:04.892 "method": "sock_impl_set_options", 00:15:04.892 "params": { 00:15:04.892 "impl_name": "posix", 00:15:04.892 "recv_buf_size": 2097152, 00:15:04.892 "send_buf_size": 2097152, 00:15:04.892 "enable_recv_pipe": true, 00:15:04.892 "enable_quickack": false, 00:15:04.892 "enable_placement_id": 0, 00:15:04.892 "enable_zerocopy_send_server": true, 00:15:04.892 "enable_zerocopy_send_client": false, 00:15:04.892 "zerocopy_threshold": 0, 00:15:04.892 "tls_version": 0, 00:15:04.892 "enable_ktls": false 00:15:04.892 } 00:15:04.892 } 00:15:04.892 ] 00:15:04.892 }, 00:15:04.892 { 00:15:04.892 "subsystem": "vmd", 00:15:04.892 "config": [] 00:15:04.892 }, 00:15:04.892 { 00:15:04.892 "subsystem": "accel", 00:15:04.892 "config": [ 00:15:04.892 { 00:15:04.892 "method": "accel_set_options", 00:15:04.892 "params": { 00:15:04.892 "small_cache_size": 128, 00:15:04.892 "large_cache_size": 16, 00:15:04.892 "task_count": 2048, 00:15:04.892 "sequence_count": 2048, 00:15:04.892 "buf_count": 2048 00:15:04.892 } 00:15:04.892 } 00:15:04.892 ] 00:15:04.892 }, 00:15:04.892 { 00:15:04.892 "subsystem": "bdev", 00:15:04.892 "config": [ 00:15:04.892 { 00:15:04.892 "method": "bdev_set_options", 00:15:04.892 "params": { 00:15:04.892 "bdev_io_pool_size": 65535, 00:15:04.892 "bdev_io_cache_size": 256, 00:15:04.892 "bdev_auto_examine": true, 00:15:04.892 "iobuf_small_cache_size": 128, 00:15:04.892 "iobuf_large_cache_size": 16 00:15:04.892 } 00:15:04.892 }, 00:15:04.892 { 00:15:04.892 "method": "bdev_raid_set_options", 00:15:04.892 "params": { 00:15:04.892 "process_window_size_kb": 1024, 00:15:04.892 "process_max_bandwidth_mb_sec": 0 00:15:04.893 } 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "method": "bdev_iscsi_set_options", 00:15:04.893 "params": { 00:15:04.893 "timeout_sec": 30 00:15:04.893 } 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "method": "bdev_nvme_set_options", 00:15:04.893 "params": { 00:15:04.893 "action_on_timeout": "none", 00:15:04.893 "timeout_us": 0, 00:15:04.893 "timeout_admin_us": 0, 00:15:04.893 "keep_alive_timeout_ms": 10000, 00:15:04.893 "arbitration_burst": 0, 00:15:04.893 "low_priority_weight": 0, 00:15:04.893 "medium_priority_weight": 0, 00:15:04.893 "high_priority_weight": 0, 00:15:04.893 "nvme_adminq_poll_period_us": 10000, 00:15:04.893 "nvme_ioq_poll_period_us": 0, 00:15:04.893 "io_queue_requests": 0, 00:15:04.893 "delay_cmd_submit": true, 00:15:04.893 "transport_retry_count": 4, 00:15:04.893 "bdev_retry_count": 3, 00:15:04.893 
"transport_ack_timeout": 0, 00:15:04.893 "ctrlr_loss_timeout_sec": 0, 00:15:04.893 "reconnect_delay_sec": 0, 00:15:04.893 "fast_io_fail_timeout_sec": 0, 00:15:04.893 "disable_auto_failback": false, 00:15:04.893 "generate_uuids": false, 00:15:04.893 "transport_tos": 0, 00:15:04.893 "nvme_error_stat": false, 00:15:04.893 "rdma_srq_size": 0, 00:15:04.893 "io_path_stat": false, 00:15:04.893 "allow_accel_sequence": false, 00:15:04.893 "rdma_max_cq_size": 0, 00:15:04.893 "rdma_cm_event_timeout_ms": 0, 00:15:04.893 "dhchap_digests": [ 00:15:04.893 "sha256", 00:15:04.893 "sha384", 00:15:04.893 "sha512" 00:15:04.893 ], 00:15:04.893 "dhchap_dhgroups": [ 00:15:04.893 "null", 00:15:04.893 "ffdhe2048", 00:15:04.893 "ffdhe3072", 00:15:04.893 "ffdhe4096", 00:15:04.893 "ffdhe6144", 00:15:04.893 "ffdhe8192" 00:15:04.893 ] 00:15:04.893 } 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "method": "bdev_nvme_set_hotplug", 00:15:04.893 "params": { 00:15:04.893 "period_us": 100000, 00:15:04.893 "enable": false 00:15:04.893 } 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "method": "bdev_malloc_create", 00:15:04.893 "params": { 00:15:04.893 "name": "malloc0", 00:15:04.893 "num_blocks": 8192, 00:15:04.893 "block_size": 4096, 00:15:04.893 "physical_block_size": 4096, 00:15:04.893 "uuid": "c135ea6d-f2aa-4544-a7df-05b095de9fa9", 00:15:04.893 "optimal_io_boundary": 0, 00:15:04.893 "md_size": 0, 00:15:04.893 "dif_type": 0, 00:15:04.893 "dif_is_head_of_md": false, 00:15:04.893 "dif_pi_format": 0 00:15:04.893 } 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "method": "bdev_wait_for_examine" 00:15:04.893 } 00:15:04.893 ] 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "subsystem": "scsi", 00:15:04.893 "config": null 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "subsystem": "scheduler", 00:15:04.893 "config": [ 00:15:04.893 { 00:15:04.893 "method": "framework_set_scheduler", 00:15:04.893 "params": { 00:15:04.893 "name": "static" 00:15:04.893 } 00:15:04.893 } 00:15:04.893 ] 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "subsystem": "vhost_scsi", 00:15:04.893 "config": [] 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "subsystem": "vhost_blk", 00:15:04.893 "config": [] 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "subsystem": "ublk", 00:15:04.893 "config": [ 00:15:04.893 { 00:15:04.893 "method": "ublk_create_target", 00:15:04.893 "params": { 00:15:04.893 "cpumask": "1" 00:15:04.893 } 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "method": "ublk_start_disk", 00:15:04.893 "params": { 00:15:04.893 "bdev_name": "malloc0", 00:15:04.893 "ublk_id": 0, 00:15:04.893 "num_queues": 1, 00:15:04.893 "queue_depth": 128 00:15:04.893 } 00:15:04.893 } 00:15:04.893 ] 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "subsystem": "nbd", 00:15:04.893 "config": [] 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "subsystem": "nvmf", 00:15:04.893 "config": [ 00:15:04.893 { 00:15:04.893 "method": "nvmf_set_config", 00:15:04.893 "params": { 00:15:04.893 "discovery_filter": "match_any", 00:15:04.893 "admin_cmd_passthru": { 00:15:04.893 "identify_ctrlr": false 00:15:04.893 }, 00:15:04.893 "dhchap_digests": [ 00:15:04.893 "sha256", 00:15:04.893 "sha384", 00:15:04.893 "sha512" 00:15:04.893 ], 00:15:04.893 "dhchap_dhgroups": [ 00:15:04.893 "null", 00:15:04.893 "ffdhe2048", 00:15:04.893 "ffdhe3072", 00:15:04.893 "ffdhe4096", 00:15:04.893 "ffdhe6144", 00:15:04.893 "ffdhe8192" 00:15:04.893 ] 00:15:04.893 } 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "method": "nvmf_set_max_subsystems", 00:15:04.893 "params": { 00:15:04.893 "max_subsystems": 1024 00:15:04.893 } 00:15:04.893 }, 00:15:04.893 
{ 00:15:04.893 "method": "nvmf_set_crdt", 00:15:04.893 "params": { 00:15:04.893 "crdt1": 0, 00:15:04.893 "crdt2": 0, 00:15:04.893 "crdt3": 0 00:15:04.893 } 00:15:04.893 } 00:15:04.893 ] 00:15:04.893 }, 00:15:04.893 { 00:15:04.893 "subsystem": "iscsi", 00:15:04.893 "config": [ 00:15:04.893 { 00:15:04.893 "method": "iscsi_set_options", 00:15:04.893 "params": { 00:15:04.893 "node_base": "iqn.2016-06.io.spdk", 00:15:04.893 "max_sessions": 128, 00:15:04.893 "max_connections_per_session": 2, 00:15:04.893 "max_queue_depth": 64, 00:15:04.893 "default_time2wait": 2, 00:15:04.893 "default_time2retain": 20, 00:15:04.893 "first_burst_length": 8192, 00:15:04.893 "immediate_data": true, 00:15:04.893 "allow_duplicated_isid": false, 00:15:04.893 "error_recovery_level": 0, 00:15:04.893 "nop_timeout": 60, 00:15:04.893 "nop_in_interval": 30, 00:15:04.893 "disable_chap": false, 00:15:04.893 "require_chap": false, 00:15:04.893 "mutual_chap": false, 00:15:04.893 "chap_group": 0, 00:15:04.893 "max_large_datain_per_connection": 64, 00:15:04.893 "max_r2t_per_connection": 4, 00:15:04.893 "pdu_pool_size": 36864, 00:15:04.893 "immediate_data_pool_size": 16384, 00:15:04.893 "data_out_pool_size": 2048 00:15:04.893 } 00:15:04.893 } 00:15:04.893 ] 00:15:04.893 } 00:15:04.893 ] 00:15:04.893 }' 00:15:04.893 11:10:33 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82792 00:15:04.893 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82792 ']' 00:15:04.893 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82792 00:15:04.893 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:05.154 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:05.154 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82792 00:15:05.154 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:05.154 killing process with pid 82792 00:15:05.154 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:05.154 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82792' 00:15:05.154 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82792 00:15:05.154 11:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82792 00:15:05.415 [2024-11-27 11:10:34.085622] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:05.415 [2024-11-27 11:10:34.121024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:05.415 [2024-11-27 11:10:34.121170] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:05.415 [2024-11-27 11:10:34.130923] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:05.415 [2024-11-27 11:10:34.130991] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:05.415 [2024-11-27 11:10:34.131000] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:05.415 [2024-11-27 11:10:34.131030] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:05.415 [2024-11-27 11:10:34.131179] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:05.989 11:10:34 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82830 00:15:05.989 11:10:34 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82830 
00:15:05.989 11:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82830 ']' 00:15:05.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.989 11:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.989 11:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:05.989 11:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.989 11:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:05.989 11:10:34 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:05.989 11:10:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:05.989 11:10:34 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:05.989 "subsystems": [ 00:15:05.989 { 00:15:05.989 "subsystem": "fsdev", 00:15:05.989 "config": [ 00:15:05.989 { 00:15:05.989 "method": "fsdev_set_opts", 00:15:05.989 "params": { 00:15:05.989 "fsdev_io_pool_size": 65535, 00:15:05.989 "fsdev_io_cache_size": 256 00:15:05.989 } 00:15:05.989 } 00:15:05.989 ] 00:15:05.989 }, 00:15:05.989 { 00:15:05.989 "subsystem": "keyring", 00:15:05.989 "config": [] 00:15:05.989 }, 00:15:05.989 { 00:15:05.989 "subsystem": "iobuf", 00:15:05.989 "config": [ 00:15:05.989 { 00:15:05.989 "method": "iobuf_set_options", 00:15:05.989 "params": { 00:15:05.989 "small_pool_count": 8192, 00:15:05.989 "large_pool_count": 1024, 00:15:05.989 "small_bufsize": 8192, 00:15:05.989 "large_bufsize": 135168 00:15:05.989 } 00:15:05.989 } 00:15:05.989 ] 00:15:05.989 }, 00:15:05.989 { 00:15:05.989 "subsystem": "sock", 00:15:05.989 "config": [ 00:15:05.989 { 00:15:05.989 "method": "sock_set_default_impl", 00:15:05.989 "params": { 00:15:05.989 "impl_name": "posix" 00:15:05.989 } 00:15:05.989 }, 00:15:05.989 { 00:15:05.989 "method": "sock_impl_set_options", 00:15:05.989 "params": { 00:15:05.989 "impl_name": "ssl", 00:15:05.989 "recv_buf_size": 4096, 00:15:05.989 "send_buf_size": 4096, 00:15:05.989 "enable_recv_pipe": true, 00:15:05.989 "enable_quickack": false, 00:15:05.989 "enable_placement_id": 0, 00:15:05.989 "enable_zerocopy_send_server": true, 00:15:05.989 "enable_zerocopy_send_client": false, 00:15:05.989 "zerocopy_threshold": 0, 00:15:05.989 "tls_version": 0, 00:15:05.989 "enable_ktls": false 00:15:05.989 } 00:15:05.989 }, 00:15:05.989 { 00:15:05.989 "method": "sock_impl_set_options", 00:15:05.989 "params": { 00:15:05.989 "impl_name": "posix", 00:15:05.989 "recv_buf_size": 2097152, 00:15:05.989 "send_buf_size": 2097152, 00:15:05.989 "enable_recv_pipe": true, 00:15:05.989 "enable_quickack": false, 00:15:05.989 "enable_placement_id": 0, 00:15:05.989 "enable_zerocopy_send_server": true, 00:15:05.989 "enable_zerocopy_send_client": false, 00:15:05.989 "zerocopy_threshold": 0, 00:15:05.989 "tls_version": 0, 00:15:05.989 "enable_ktls": false 00:15:05.989 } 00:15:05.989 } 00:15:05.989 ] 00:15:05.989 }, 00:15:05.989 { 00:15:05.989 "subsystem": "vmd", 00:15:05.989 "config": [] 00:15:05.989 }, 00:15:05.989 { 00:15:05.989 "subsystem": "accel", 00:15:05.989 "config": [ 00:15:05.989 { 00:15:05.989 "method": "accel_set_options", 00:15:05.989 "params": { 00:15:05.989 "small_cache_size": 128, 00:15:05.989 "large_cache_size": 16, 00:15:05.989 "task_count": 2048, 00:15:05.989 
"sequence_count": 2048, 00:15:05.989 "buf_count": 2048 00:15:05.989 } 00:15:05.989 } 00:15:05.989 ] 00:15:05.989 }, 00:15:05.989 { 00:15:05.989 "subsystem": "bdev", 00:15:05.989 "config": [ 00:15:05.989 { 00:15:05.989 "method": "bdev_set_options", 00:15:05.989 "params": { 00:15:05.989 "bdev_io_pool_size": 65535, 00:15:05.989 "bdev_io_cache_size": 256, 00:15:05.989 "bdev_auto_examine": true, 00:15:05.989 "iobuf_small_cache_size": 128, 00:15:05.989 "iobuf_large_cache_size": 16 00:15:05.989 } 00:15:05.989 }, 00:15:05.989 { 00:15:05.989 "method": "bdev_raid_set_options", 00:15:05.989 "params": { 00:15:05.989 "process_window_size_kb": 1024, 00:15:05.989 "process_max_bandwidth_mb_sec": 0 00:15:05.989 } 00:15:05.989 }, 00:15:05.989 { 00:15:05.989 "method": "bdev_iscsi_set_options", 00:15:05.989 "params": { 00:15:05.989 "timeout_sec": 30 00:15:05.989 } 00:15:05.989 }, 00:15:05.989 { 00:15:05.989 "method": "bdev_nvme_set_options", 00:15:05.989 "params": { 00:15:05.989 "action_on_timeout": "none", 00:15:05.989 "timeout_us": 0, 00:15:05.989 "timeout_admin_us": 0, 00:15:05.989 "keep_alive_timeout_ms": 10000, 00:15:05.989 "arbitration_burst": 0, 00:15:05.989 "low_priority_weight": 0, 00:15:05.989 "medium_priority_weight": 0, 00:15:05.989 "high_priority_weight": 0, 00:15:05.989 "nvme_adminq_poll_period_us": 10000, 00:15:05.989 "nvme_ioq_poll_period_us": 0, 00:15:05.989 "io_queue_requests": 0, 00:15:05.989 "delay_cmd_submit": true, 00:15:05.989 "transport_retry_count": 4, 00:15:05.989 "bdev_retry_count": 3, 00:15:05.989 "transport_ack_timeout": 0, 00:15:05.989 "ctrlr_loss_timeout_sec": 0, 00:15:05.989 "reconnect_delay_sec": 0, 00:15:05.989 "fast_io_fail_timeout_sec": 0, 00:15:05.989 "disable_auto_failback": false, 00:15:05.989 "generate_uuids": false, 00:15:05.989 "transport_tos": 0, 00:15:05.989 "nvme_error_stat": false, 00:15:05.989 "rdma_srq_size": 0, 00:15:05.989 "io_path_stat": false, 00:15:05.989 "allow_accel_sequence": false, 00:15:05.989 "rdma_max_cq_size": 0, 00:15:05.989 "rdma_cm_event_timeout_ms": 0, 00:15:05.989 "dhchap_digests": [ 00:15:05.989 "sha256", 00:15:05.989 "sha384", 00:15:05.989 "sha512" 00:15:05.989 ], 00:15:05.989 "dhchap_dhgroups": [ 00:15:05.989 "null", 00:15:05.989 "ffdhe2048", 00:15:05.989 "ffdhe3072", 00:15:05.990 "ffdhe4096", 00:15:05.990 "ffdhe6144", 00:15:05.990 "ffdhe8192" 00:15:05.990 ] 00:15:05.990 } 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "method": "bdev_nvme_set_hotplug", 00:15:05.990 "params": { 00:15:05.990 "period_us": 100000, 00:15:05.990 "enable": false 00:15:05.990 } 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "method": "bdev_malloc_create", 00:15:05.990 "params": { 00:15:05.990 "name": "malloc0", 00:15:05.990 "num_blocks": 8192, 00:15:05.990 "block_size": 4096, 00:15:05.990 "physical_block_size": 4096, 00:15:05.990 "uuid": "c135ea6d-f2aa-4544-a7df-05b095de9fa9", 00:15:05.990 "optimal_io_boundary": 0, 00:15:05.990 "md_size": 0, 00:15:05.990 "dif_type": 0, 00:15:05.990 "dif_is_head_of_md": false, 00:15:05.990 "dif_pi_format": 0 00:15:05.990 } 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "method": "bdev_wait_for_examine" 00:15:05.990 } 00:15:05.990 ] 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "subsystem": "scsi", 00:15:05.990 "config": null 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "subsystem": "scheduler", 00:15:05.990 "config": [ 00:15:05.990 { 00:15:05.990 "method": "framework_set_scheduler", 00:15:05.990 "params": { 00:15:05.990 "name": "static" 00:15:05.990 } 00:15:05.990 } 00:15:05.990 ] 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "subsystem": 
"vhost_scsi", 00:15:05.990 "config": [] 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "subsystem": "vhost_blk", 00:15:05.990 "config": [] 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "subsystem": "ublk", 00:15:05.990 "config": [ 00:15:05.990 { 00:15:05.990 "method": "ublk_create_target", 00:15:05.990 "params": { 00:15:05.990 "cpumask": "1" 00:15:05.990 } 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "method": "ublk_start_disk", 00:15:05.990 "params": { 00:15:05.990 "bdev_name": "malloc0", 00:15:05.990 "ublk_id": 0, 00:15:05.990 "num_queues": 1, 00:15:05.990 "queue_depth": 128 00:15:05.990 } 00:15:05.990 } 00:15:05.990 ] 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "subsystem": "nbd", 00:15:05.990 "config": [] 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "subsystem": "nvmf", 00:15:05.990 "config": [ 00:15:05.990 { 00:15:05.990 "method": "nvmf_set_config", 00:15:05.990 "params": { 00:15:05.990 "discovery_filter": "match_any", 00:15:05.990 "admin_cmd_passthru": { 00:15:05.990 "identify_ctrlr": false 00:15:05.990 }, 00:15:05.990 "dhchap_digests": [ 00:15:05.990 "sha256", 00:15:05.990 "sha384", 00:15:05.990 "sha512" 00:15:05.990 ], 00:15:05.990 "dhchap_dhgroups": [ 00:15:05.990 "null", 00:15:05.990 "ffdhe2048", 00:15:05.990 "ffdhe3072", 00:15:05.990 "ffdhe4096", 00:15:05.990 "ffdhe6144", 00:15:05.990 "ffdhe8192" 00:15:05.990 ] 00:15:05.990 } 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "method": "nvmf_set_max_subsystems", 00:15:05.990 "params": { 00:15:05.990 "max_subsystems": 1024 00:15:05.990 } 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "method": "nvmf_set_crdt", 00:15:05.990 "params": { 00:15:05.990 "crdt1": 0, 00:15:05.990 "crdt2": 0, 00:15:05.990 "crdt3": 0 00:15:05.990 } 00:15:05.990 } 00:15:05.990 ] 00:15:05.990 }, 00:15:05.990 { 00:15:05.990 "subsystem": "iscsi", 00:15:05.990 "config": [ 00:15:05.990 { 00:15:05.990 "method": "iscsi_set_options", 00:15:05.990 "params": { 00:15:05.990 "node_base": "iqn.2016-06.io.spdk", 00:15:05.990 "max_sessions": 128, 00:15:05.990 "max_connections_per_session": 2, 00:15:05.990 "max_queue_depth": 64, 00:15:05.990 "default_time2wait": 2, 00:15:05.990 "default_time2retain": 20, 00:15:05.990 "first_burst_length": 8192, 00:15:05.990 "immediate_data": true, 00:15:05.990 "allow_duplicated_isid": false, 00:15:05.990 "error_recovery_level": 0, 00:15:05.990 "nop_timeout": 60, 00:15:05.990 "nop_in_interval": 30, 00:15:05.990 "disable_chap": false, 00:15:05.990 "require_chap": false, 00:15:05.990 "mutual_chap": false, 00:15:05.990 "chap_group": 0, 00:15:05.990 "max_large_datain_per_connection": 64, 00:15:05.990 "max_r2t_per_connection": 4, 00:15:05.990 "pdu_pool_size": 36864, 00:15:05.990 "immediate_data_pool_size": 16384, 00:15:05.990 "data_out_pool_size": 2048 00:15:05.990 } 00:15:05.990 } 00:15:05.990 ] 00:15:05.990 } 00:15:05.990 ] 00:15:05.990 }' 00:15:05.990 [2024-11-27 11:10:34.713244] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:05.990 [2024-11-27 11:10:34.713401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82830 ] 00:15:05.990 [2024-11-27 11:10:34.862039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.252 [2024-11-27 11:10:34.921921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.514 [2024-11-27 11:10:35.308913] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:06.514 [2024-11-27 11:10:35.309280] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:06.514 [2024-11-27 11:10:35.317071] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:06.514 [2024-11-27 11:10:35.317164] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:06.514 [2024-11-27 11:10:35.317172] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:06.514 [2024-11-27 11:10:35.317180] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:06.514 [2024-11-27 11:10:35.326021] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:06.514 [2024-11-27 11:10:35.326053] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:06.514 [2024-11-27 11:10:35.332927] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:06.514 [2024-11-27 11:10:35.333043] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:06.514 [2024-11-27 11:10:35.349926] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82830 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82830 ']' 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82830 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82830 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:06.775 killing process with pid 82830 00:15:06.775 
11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82830' 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82830 00:15:06.775 11:10:35 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82830 00:15:07.348 [2024-11-27 11:10:35.919532] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:07.348 [2024-11-27 11:10:35.956920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:07.348 [2024-11-27 11:10:35.957088] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:07.348 [2024-11-27 11:10:35.964942] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:07.348 [2024-11-27 11:10:35.965009] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:07.348 [2024-11-27 11:10:35.965018] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:07.348 [2024-11-27 11:10:35.965057] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:07.348 [2024-11-27 11:10:35.965214] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:07.610 11:10:36 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:07.610 00:15:07.610 real 0m3.978s 00:15:07.610 user 0m2.664s 00:15:07.610 sys 0m1.979s 00:15:07.610 11:10:36 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.610 ************************************ 00:15:07.610 END TEST test_save_ublk_config 00:15:07.610 ************************************ 00:15:07.610 11:10:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:07.872 11:10:36 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82887 00:15:07.872 11:10:36 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:07.872 11:10:36 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82887 00:15:07.872 11:10:36 ublk -- common/autotest_common.sh@831 -- # '[' -z 82887 ']' 00:15:07.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.872 11:10:36 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.872 11:10:36 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.872 11:10:36 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.872 11:10:36 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.872 11:10:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.872 11:10:36 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:07.872 [2024-11-27 11:10:36.600188] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
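Editorial aside: the harness now launches a fresh target with -m 0x3 (a two-core mask) for the remaining ublk tests and blocks in waitforlisten until the RPC socket answers. A rough stand-in for that start-and-wait step, assuming default paths and using rpc_get_methods purely as a liveness probe (the real waitforlisten helper is more careful):

    # Sketch: start the target on cores 0-1 and poll until the RPC socket is up.
    ./build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2   # crude equivalent of the harness's waitforlisten retry loop
    done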
00:15:07.872 [2024-11-27 11:10:36.600340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82887 ] 00:15:07.872 [2024-11-27 11:10:36.747789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:08.133 [2024-11-27 11:10:36.782334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.133 [2024-11-27 11:10:36.782370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.705 11:10:37 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.705 11:10:37 ublk -- common/autotest_common.sh@864 -- # return 0 00:15:08.705 11:10:37 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:08.705 11:10:37 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:08.705 11:10:37 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.705 11:10:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.705 ************************************ 00:15:08.705 START TEST test_create_ublk 00:15:08.705 ************************************ 00:15:08.705 11:10:37 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:15:08.705 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:08.705 11:10:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.705 11:10:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.705 [2024-11-27 11:10:37.468911] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:08.705 [2024-11-27 11:10:37.470537] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:08.705 11:10:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.705 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:08.705 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:08.705 11:10:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.705 11:10:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.705 11:10:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.705 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:08.705 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:08.705 11:10:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.705 11:10:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.705 [2024-11-27 11:10:37.555041] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:08.705 [2024-11-27 11:10:37.555456] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:08.705 [2024-11-27 11:10:37.555471] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:08.705 [2024-11-27 11:10:37.555481] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:08.705 [2024-11-27 11:10:37.564168] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:08.705 [2024-11-27 11:10:37.564202] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:08.705 
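Editorial aside: stripped of the rpc_cmd wrapper, the three calls driving this sequence reduce to the sketch below (against the default socket); the UBLK_CMD_* kernel handshake they trigger continues on the following lines.

    # Sketch of the single-disk setup test_create_ublk performs above.
    ./scripts/rpc.py ublk_create_target                      # default cpumask
    ./scripts/rpc.py bdev_malloc_create 128 4096             # 128 MiB bdev, 4 KiB blocks -> prints "Malloc0"
    ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # export it as /dev/ublkb0, 4 queues, depth 512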
[2024-11-27 11:10:37.570917] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:08.705 [2024-11-27 11:10:37.571435] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:08.966 [2024-11-27 11:10:37.589918] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:08.966 11:10:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:08.966 11:10:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.966 11:10:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.966 11:10:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:08.966 { 00:15:08.966 "ublk_device": "/dev/ublkb0", 00:15:08.966 "id": 0, 00:15:08.966 "queue_depth": 512, 00:15:08.966 "num_queues": 4, 00:15:08.966 "bdev_name": "Malloc0" 00:15:08.966 } 00:15:08.966 ]' 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:08.966 11:10:37 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:08.966 11:10:37 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:08.966 11:10:37 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:08.966 11:10:37 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:08.966 11:10:37 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:08.966 11:10:37 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:08.966 11:10:37 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:08.966 11:10:37 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:08.966 11:10:37 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:08.966 11:10:37 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:08.966 11:10:37 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
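Editorial aside: run_fio_test assembles the command string above and executes it on the next line. Lifted out of the harness, the same write-then-verify pass against an existing /dev/ublkb0 is a single fio invocation, identical flags, reformatted here for readability:

    # Sketch: the 0xcc write/verify workload the harness runs below, as a standalone command.
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0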
00:15:08.966 11:10:37 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:09.228 fio: verification read phase will never start because write phase uses all of runtime 00:15:09.228 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:09.228 fio-3.35 00:15:09.228 Starting 1 process 00:15:19.258 00:15:19.258 fio_test: (groupid=0, jobs=1): err= 0: pid=82927: Wed Nov 27 11:10:48 2024 00:15:19.258 write: IOPS=16.6k, BW=64.7MiB/s (67.8MB/s)(647MiB/10001msec); 0 zone resets 00:15:19.258 clat (usec): min=32, max=3868, avg=59.66, stdev=81.67 00:15:19.258 lat (usec): min=32, max=3868, avg=60.08, stdev=81.69 00:15:19.258 clat percentiles (usec): 00:15:19.258 | 1.00th=[ 44], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 52], 00:15:19.258 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 57], 00:15:19.258 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 66], 95.00th=[ 71], 00:15:19.258 | 99.00th=[ 94], 99.50th=[ 116], 99.90th=[ 1237], 99.95th=[ 2409], 00:15:19.258 | 99.99th=[ 3490] 00:15:19.258 bw ( KiB/s): min=57168, max=69920, per=100.00%, avg=66228.21, stdev=3837.09, samples=19 00:15:19.258 iops : min=14292, max=17480, avg=16557.05, stdev=959.27, samples=19 00:15:19.258 lat (usec) : 50=10.21%, 100=88.96%, 250=0.65%, 500=0.05%, 750=0.01% 00:15:19.258 lat (usec) : 1000=0.01% 00:15:19.258 lat (msec) : 2=0.04%, 4=0.07% 00:15:19.258 cpu : usr=2.88%, sys=12.73%, ctx=165575, majf=0, minf=797 00:15:19.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:19.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.258 issued rwts: total=0,165572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:19.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:19.258 00:15:19.258 Run status group 0 (all jobs): 00:15:19.258 WRITE: bw=64.7MiB/s (67.8MB/s), 64.7MiB/s-64.7MiB/s (67.8MB/s-67.8MB/s), io=647MiB (678MB), run=10001-10001msec 00:15:19.258 00:15:19.258 Disk stats (read/write): 00:15:19.258 ublkb0: ios=0/163795, merge=0/0, ticks=0/8398, in_queue=8399, util=99.09% 00:15:19.258 11:10:48 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:19.258 [2024-11-27 11:10:48.019582] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:19.258 [2024-11-27 11:10:48.060949] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:19.258 [2024-11-27 11:10:48.061610] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:19.258 [2024-11-27 11:10:48.068915] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:19.258 [2024-11-27 11:10:48.069160] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:19.258 [2024-11-27 11:10:48.069173] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.258 11:10:48 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:19.258 [2024-11-27 11:10:48.084994] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:19.258 request: 00:15:19.258 { 00:15:19.258 "ublk_id": 0, 00:15:19.258 "method": "ublk_stop_disk", 00:15:19.258 "req_id": 1 00:15:19.258 } 00:15:19.258 Got JSON-RPC error response 00:15:19.258 response: 00:15:19.258 { 00:15:19.258 "code": -19, 00:15:19.258 "message": "No such device" 00:15:19.258 } 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:19.258 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:19.258 11:10:48 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:19.259 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.259 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:19.259 [2024-11-27 11:10:48.100973] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:19.259 [2024-11-27 11:10:48.102677] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:19.259 [2024-11-27 11:10:48.102708] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:19.259 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.259 11:10:48 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:19.259 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.259 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:19.517 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.517 11:10:48 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:19.517 11:10:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:19.517 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.517 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:19.517 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.517 11:10:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:19.517 11:10:48 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:19.517 11:10:48 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:19.517 11:10:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:19.517 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.517 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:19.517 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.517 11:10:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:19.517 11:10:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:19.517 11:10:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:19.517 00:15:19.517 real 0m10.823s 00:15:19.517 user 0m0.599s 00:15:19.517 sys 0m1.354s 00:15:19.517 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:19.517 11:10:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:19.517 ************************************ 00:15:19.517 END TEST test_create_ublk 00:15:19.517 ************************************ 00:15:19.517 11:10:48 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:19.517 11:10:48 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:19.517 11:10:48 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:19.517 11:10:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:19.517 ************************************ 00:15:19.517 START TEST test_create_multi_ublk 00:15:19.517 ************************************ 00:15:19.517 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:15:19.517 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:19.517 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.517 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:19.517 [2024-11-27 11:10:48.336911] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:19.517 [2024-11-27 11:10:48.338047] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:19.517 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.517 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:19.517 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:19.517 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:19.517 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:19.517 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.517 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:19.776 [2024-11-27 11:10:48.429364] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
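Editorial aside: test_create_multi_ublk repeats the malloc-plus-start pair above for indices 0 through 3 (Malloc1 through Malloc3 follow below). Condensed into the loop the harness effectively runs, as a sketch:

    # Sketch: one malloc bdev per ublk disk, four disks total, matching the traces that follow.
    for i in $(seq 0 3); do
      ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096        # 128 MiB backing bdev
      ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512      # exported as /dev/ublkb$i
    done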
00:15:19.776 [2024-11-27 11:10:48.429690] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:19.776 [2024-11-27 11:10:48.429704] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:19.776 [2024-11-27 11:10:48.429710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:19.776 [2024-11-27 11:10:48.440972] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:19.776 [2024-11-27 11:10:48.440992] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:19.776 [2024-11-27 11:10:48.452924] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:19.776 [2024-11-27 11:10:48.453442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:19.776 [2024-11-27 11:10:48.467910] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:19.776 [2024-11-27 11:10:48.564012] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:19.776 [2024-11-27 11:10:48.564329] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:19.776 [2024-11-27 11:10:48.564341] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:19.776 [2024-11-27 11:10:48.564348] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:19.776 [2024-11-27 11:10:48.575920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:19.776 [2024-11-27 11:10:48.575941] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:19.776 [2024-11-27 11:10:48.587911] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:19.776 [2024-11-27 11:10:48.588419] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:19.776 [2024-11-27 11:10:48.623914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:19.776 11:10:48 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.776 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.035 [2024-11-27 11:10:48.732015] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:20.035 [2024-11-27 11:10:48.732332] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:20.035 [2024-11-27 11:10:48.732346] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:20.035 [2024-11-27 11:10:48.732352] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:20.035 [2024-11-27 11:10:48.743932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:20.035 [2024-11-27 11:10:48.743949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:20.035 [2024-11-27 11:10:48.755920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:20.035 [2024-11-27 11:10:48.756431] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:20.035 [2024-11-27 11:10:48.791916] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.035 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.035 [2024-11-27 11:10:48.900007] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:20.035 [2024-11-27 11:10:48.900327] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:20.035 [2024-11-27 11:10:48.900340] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:20.035 [2024-11-27 11:10:48.900346] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:20.035 [2024-11-27 
11:10:48.911927] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:20.035 [2024-11-27 11:10:48.911950] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:20.294 [2024-11-27 11:10:48.923916] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:20.294 [2024-11-27 11:10:48.924435] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:20.294 [2024-11-27 11:10:48.936934] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:20.294 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.294 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:20.294 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:20.294 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.294 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.294 11:10:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.294 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:20.294 { 00:15:20.294 "ublk_device": "/dev/ublkb0", 00:15:20.294 "id": 0, 00:15:20.294 "queue_depth": 512, 00:15:20.294 "num_queues": 4, 00:15:20.294 "bdev_name": "Malloc0" 00:15:20.294 }, 00:15:20.294 { 00:15:20.294 "ublk_device": "/dev/ublkb1", 00:15:20.294 "id": 1, 00:15:20.294 "queue_depth": 512, 00:15:20.294 "num_queues": 4, 00:15:20.294 "bdev_name": "Malloc1" 00:15:20.294 }, 00:15:20.294 { 00:15:20.294 "ublk_device": "/dev/ublkb2", 00:15:20.294 "id": 2, 00:15:20.294 "queue_depth": 512, 00:15:20.294 "num_queues": 4, 00:15:20.294 "bdev_name": "Malloc2" 00:15:20.294 }, 00:15:20.294 { 00:15:20.294 "ublk_device": "/dev/ublkb3", 00:15:20.294 "id": 3, 00:15:20.294 "queue_depth": 512, 00:15:20.294 "num_queues": 4, 00:15:20.294 "bdev_name": "Malloc3" 00:15:20.294 } 00:15:20.294 ]' 00:15:20.294 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:20.294 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.294 11:10:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:20.294 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:20.294 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:20.294 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:20.294 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:20.294 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:20.294 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:20.294 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:20.294 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:20.294 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:20.294 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.294 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:20.294 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
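Editorial aside: the checks above and below walk the ublk_get_disks array field by field with jq. The same verification can be collapsed into one jq program; this is a sketch only, assuming jq 1.5+ and the rpc.py helper:

    # Sketch: verify all four exported disks in one pass instead of per-field [[ ... ]] checks.
    ./scripts/rpc.py ublk_get_disks | jq -e '
      length == 4
      and all(.[]; .queue_depth == 512 and .num_queues == 4)
      and (map(.ublk_device) == ["/dev/ublkb0","/dev/ublkb1","/dev/ublkb2","/dev/ublkb3"])
      and (map(.bdev_name)   == ["Malloc0","Malloc1","Malloc2","Malloc3"])'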
00:15:20.294 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:20.552 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.811 [2024-11-27 11:10:49.607972] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:20.811 [2024-11-27 11:10:49.641476] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:20.811 [2024-11-27 11:10:49.642566] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:20.811 [2024-11-27 11:10:49.647912] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:20.811 [2024-11-27 11:10:49.648149] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:20.811 [2024-11-27 11:10:49.648161] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.811 11:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:20.811 [2024-11-27 11:10:49.661989] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:20.811 [2024-11-27 11:10:49.693438] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:21.071 [2024-11-27 11:10:49.694509] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:21.071 [2024-11-27 11:10:49.699922] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:21.071 [2024-11-27 11:10:49.700169] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:21.071 [2024-11-27 11:10:49.700180] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:21.071 11:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.071 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:21.071 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:21.071 11:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.071 11:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:21.071 [2024-11-27 11:10:49.712996] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:21.071 [2024-11-27 11:10:49.757952] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:21.071 [2024-11-27 11:10:49.758660] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:21.071 [2024-11-27 11:10:49.767918] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:21.071 [2024-11-27 11:10:49.768156] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:21.071 [2024-11-27 11:10:49.768167] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:21.071 11:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.071 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:21.071 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:21.071 11:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.071 11:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
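Editorial aside: the stop sequence in progress here finishes below with disk 3, a ublk_destroy_target call issued with a 120-second RPC timeout, and deletion of the four malloc bdevs. Condensed into a sketch of that teardown order:

    # Sketch of the teardown the harness completes below: stop every disk, drop the target, free the bdevs.
    for i in $(seq 0 3); do
      ./scripts/rpc.py ublk_stop_disk "$i"            # drives UBLK_CMD_STOP_DEV / UBLK_CMD_DEL_DEV per disk
    done
    ./scripts/rpc.py -t 120 ublk_destroy_target       # generous timeout while queues drain
    for i in $(seq 0 3); do
      ./scripts/rpc.py bdev_malloc_delete "Malloc$i"
    done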
00:15:21.071 [2024-11-27 11:10:49.783966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:21.071 [2024-11-27 11:10:49.813381] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:21.071 [2024-11-27 11:10:49.814374] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:21.071 [2024-11-27 11:10:49.819916] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:21.071 [2024-11-27 11:10:49.820156] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:21.071 [2024-11-27 11:10:49.820167] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:21.071 11:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.071 11:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:21.329 [2024-11-27 11:10:50.011981] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:21.329 [2024-11-27 11:10:50.013463] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:21.329 [2024-11-27 11:10:50.013496] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:21.329 11:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:21.329 11:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:21.329 11:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:21.329 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.329 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:21.329 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.329 11:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:21.329 11:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:21.329 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.330 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:21.330 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.330 11:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:21.330 11:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:21.330 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.330 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:21.588 ************************************ 00:15:21.588 END TEST test_create_multi_ublk 00:15:21.588 ************************************ 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:21.588 00:15:21.588 real 0m2.097s 00:15:21.588 user 0m0.818s 00:15:21.588 sys 0m0.126s 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:21.588 11:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:21.588 11:10:50 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:21.588 11:10:50 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:21.588 11:10:50 ublk -- ublk/ublk.sh@130 -- # killprocess 82887 00:15:21.588 11:10:50 ublk -- common/autotest_common.sh@950 -- # '[' -z 82887 ']' 00:15:21.588 11:10:50 ublk -- common/autotest_common.sh@954 -- # kill -0 82887 00:15:21.588 11:10:50 ublk -- common/autotest_common.sh@955 -- # uname 00:15:21.588 11:10:50 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.588 11:10:50 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82887 00:15:21.847 killing process with pid 82887 00:15:21.847 11:10:50 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.847 11:10:50 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.847 11:10:50 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82887' 00:15:21.847 11:10:50 ublk -- common/autotest_common.sh@969 -- # kill 82887 00:15:21.847 11:10:50 ublk -- common/autotest_common.sh@974 -- # wait 82887 00:15:21.847 [2024-11-27 11:10:50.706555] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:21.847 [2024-11-27 11:10:50.706616] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:22.106 ************************************ 00:15:22.106 END TEST ublk 00:15:22.106 ************************************ 00:15:22.106 00:15:22.106 real 0m18.680s 00:15:22.106 user 0m28.662s 00:15:22.106 sys 0m7.867s 00:15:22.106 11:10:50 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:22.106 11:10:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:22.367 11:10:51 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:22.367 11:10:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:15:22.367 11:10:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:22.367 11:10:51 -- common/autotest_common.sh@10 -- # set +x 00:15:22.367 ************************************ 00:15:22.367 START TEST ublk_recovery 00:15:22.367 ************************************ 00:15:22.367 11:10:51 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:22.367 * Looking for test storage... 00:15:22.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:22.367 11:10:51 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:22.367 11:10:51 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:15:22.367 11:10:51 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:22.367 11:10:51 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:15:22.367 11:10:51 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:15:22.368 11:10:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:22.368 11:10:51 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:15:22.368 11:10:51 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:22.368 11:10:51 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:22.368 11:10:51 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:22.368 11:10:51 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:15:22.368 11:10:51 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:22.368 11:10:51 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:22.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.368 --rc genhtml_branch_coverage=1 00:15:22.368 --rc genhtml_function_coverage=1 00:15:22.368 --rc genhtml_legend=1 00:15:22.368 --rc geninfo_all_blocks=1 00:15:22.368 --rc geninfo_unexecuted_blocks=1 00:15:22.368 00:15:22.368 ' 00:15:22.368 11:10:51 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:22.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.368 --rc genhtml_branch_coverage=1 00:15:22.368 --rc genhtml_function_coverage=1 00:15:22.368 --rc genhtml_legend=1 00:15:22.368 --rc geninfo_all_blocks=1 00:15:22.368 --rc geninfo_unexecuted_blocks=1 00:15:22.368 00:15:22.368 ' 00:15:22.368 11:10:51 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:22.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.368 --rc genhtml_branch_coverage=1 00:15:22.368 --rc genhtml_function_coverage=1 00:15:22.368 --rc genhtml_legend=1 00:15:22.368 --rc geninfo_all_blocks=1 00:15:22.368 --rc geninfo_unexecuted_blocks=1 00:15:22.368 00:15:22.368 ' 00:15:22.368 11:10:51 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:22.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.368 --rc genhtml_branch_coverage=1 00:15:22.368 --rc genhtml_function_coverage=1 00:15:22.368 --rc genhtml_legend=1 00:15:22.368 --rc geninfo_all_blocks=1 00:15:22.368 --rc geninfo_unexecuted_blocks=1 00:15:22.368 00:15:22.368 ' 00:15:22.368 11:10:51 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:22.368 11:10:51 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:22.368 11:10:51 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:22.368 11:10:51 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:22.368 11:10:51 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:22.368 11:10:51 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:22.368 11:10:51 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:22.368 11:10:51 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:22.368 11:10:51 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:15:22.368 11:10:51 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:22.368 11:10:51 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=83249 00:15:22.368 11:10:51 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:22.368 11:10:51 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 83249 00:15:22.368 11:10:51 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83249 ']' 00:15:22.368 11:10:51 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:22.368 11:10:51 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.368 11:10:51 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.368 11:10:51 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.368 11:10:51 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.368 11:10:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.627 [2024-11-27 11:10:51.261696] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:22.627 [2024-11-27 11:10:51.261978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83249 ] 00:15:22.627 [2024-11-27 11:10:51.409685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:22.627 [2024-11-27 11:10:51.468452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.627 [2024-11-27 11:10:51.468526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.560 11:10:52 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.561 11:10:52 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:23.561 11:10:52 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:23.561 11:10:52 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.561 11:10:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.561 [2024-11-27 11:10:52.096908] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:23.561 [2024-11-27 11:10:52.098136] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:23.561 11:10:52 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.561 11:10:52 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:23.561 11:10:52 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.561 11:10:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.561 malloc0 00:15:23.561 11:10:52 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.561 11:10:52 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:23.561 11:10:52 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.561 11:10:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.561 [2024-11-27 11:10:52.137025] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 
num_queues 2 queue_depth 128 00:15:23.561 [2024-11-27 11:10:52.137118] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:23.561 [2024-11-27 11:10:52.137131] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:23.561 [2024-11-27 11:10:52.137138] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:23.561 [2024-11-27 11:10:52.146003] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:23.561 [2024-11-27 11:10:52.146027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:23.561 [2024-11-27 11:10:52.152912] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:23.561 [2024-11-27 11:10:52.153058] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:23.561 [2024-11-27 11:10:52.167932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:23.561 1 00:15:23.561 11:10:52 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.561 11:10:52 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:24.493 11:10:53 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=83283 00:15:24.493 11:10:53 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:24.493 11:10:53 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:24.493 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.493 fio-3.35 00:15:24.493 Starting 1 process 00:15:29.759 11:10:58 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 83249 00:15:29.759 11:10:58 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:35.049 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 83249 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:35.049 11:11:03 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=83394 00:15:35.049 11:11:03 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:35.049 11:11:03 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 83394 00:15:35.050 11:11:03 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:35.050 11:11:03 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83394 ']' 00:15:35.050 11:11:03 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.050 11:11:03 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.050 11:11:03 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.050 11:11:03 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.050 11:11:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.050 [2024-11-27 11:11:03.261748] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
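A minimal sketch of the sequence the ublk_recovery test drives in the trace above, using the same RPCs that appear there (rpc.py stands for scripts/rpc.py, which the script reaches through its rpc_cmd wrapper; the recovery half follows once the restarted target, pid 83394, is listening):

    # start the target and export a malloc bdev through ublk, as logged above
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
    spdk_pid=$!
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128

    # run I/O against the exported device, then kill the target uncleanly;
    # the second spdk_tgt later issues "rpc.py ublk_recover_disk malloc0 1"
    # while this fio job keeps running
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    kill -9 "$spdk_pid"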
00:15:35.050 [2024-11-27 11:11:03.261878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83394 ] 00:15:35.050 [2024-11-27 11:11:03.411105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:35.050 [2024-11-27 11:11:03.455029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.050 [2024-11-27 11:11:03.455064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.311 11:11:04 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.311 11:11:04 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:35.311 11:11:04 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:35.311 11:11:04 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.311 11:11:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.311 [2024-11-27 11:11:04.097909] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:35.311 [2024-11-27 11:11:04.099140] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:35.311 11:11:04 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.311 11:11:04 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:35.311 11:11:04 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.311 11:11:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.311 malloc0 00:15:35.311 11:11:04 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.311 11:11:04 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:35.311 11:11:04 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.311 11:11:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.311 [2024-11-27 11:11:04.138016] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:35.311 [2024-11-27 11:11:04.138050] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:35.311 [2024-11-27 11:11:04.138057] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:35.311 [2024-11-27 11:11:04.145947] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:35.311 [2024-11-27 11:11:04.145965] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:15:35.311 [2024-11-27 11:11:04.145983] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:35.311 [2024-11-27 11:11:04.146057] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:35.311 1 00:15:35.311 11:11:04 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.311 11:11:04 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 83283 00:15:35.311 [2024-11-27 11:11:04.153915] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:35.311 [2024-11-27 11:11:04.158277] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:35.311 [2024-11-27 11:11:04.168055] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:35.311 [2024-11-27 
11:11:04.168071] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:31.561 00:16:31.561 fio_test: (groupid=0, jobs=1): err= 0: pid=83286: Wed Nov 27 11:11:53 2024 00:16:31.561 read: IOPS=26.9k, BW=105MiB/s (110MB/s)(6303MiB/60002msec) 00:16:31.561 slat (nsec): min=1089, max=2402.5k, avg=5186.26, stdev=3480.32 00:16:31.561 clat (usec): min=664, max=5995.2k, avg=2335.89, stdev=37438.41 00:16:31.561 lat (usec): min=670, max=5995.2k, avg=2341.08, stdev=37438.41 00:16:31.561 clat percentiles (usec): 00:16:31.561 | 1.00th=[ 1663], 5.00th=[ 1827], 10.00th=[ 1876], 20.00th=[ 1893], 00:16:31.561 | 30.00th=[ 1926], 40.00th=[ 1942], 50.00th=[ 1958], 60.00th=[ 1975], 00:16:31.561 | 70.00th=[ 2008], 80.00th=[ 2089], 90.00th=[ 2409], 95.00th=[ 3032], 00:16:31.561 | 99.00th=[ 4817], 99.50th=[ 5145], 99.90th=[ 6390], 99.95th=[ 8029], 00:16:31.561 | 99.99th=[13173] 00:16:31.561 bw ( KiB/s): min=24440, max=125904, per=100.00%, avg=118483.62, stdev=13725.10, samples=108 00:16:31.561 iops : min= 6110, max=31476, avg=29620.90, stdev=3431.27, samples=108 00:16:31.561 write: IOPS=26.9k, BW=105MiB/s (110MB/s)(6298MiB/60002msec); 0 zone resets 00:16:31.561 slat (nsec): min=1122, max=1136.8k, avg=5280.37, stdev=3118.71 00:16:31.561 clat (usec): min=637, max=5995.4k, avg=2414.03, stdev=38045.12 00:16:31.561 lat (usec): min=643, max=5995.5k, avg=2419.31, stdev=38045.11 00:16:31.561 clat percentiles (usec): 00:16:31.561 | 1.00th=[ 1680], 5.00th=[ 1876], 10.00th=[ 1942], 20.00th=[ 1975], 00:16:31.561 | 30.00th=[ 2008], 40.00th=[ 2024], 50.00th=[ 2040], 60.00th=[ 2057], 00:16:31.561 | 70.00th=[ 2089], 80.00th=[ 2180], 90.00th=[ 2474], 95.00th=[ 2999], 00:16:31.561 | 99.00th=[ 4817], 99.50th=[ 5211], 99.90th=[ 6521], 99.95th=[ 8225], 00:16:31.561 | 99.99th=[13304] 00:16:31.561 bw ( KiB/s): min=25272, max=126080, per=100.00%, avg=118382.23, stdev=13621.13, samples=108 00:16:31.561 iops : min= 6318, max=31520, avg=29595.55, stdev=3405.28, samples=108 00:16:31.561 lat (usec) : 750=0.01%, 1000=0.01% 00:16:31.562 lat (msec) : 2=47.72%, 4=49.96%, 10=2.27%, 20=0.03%, >=2000=0.01% 00:16:31.562 cpu : usr=6.38%, sys=28.35%, ctx=108245, majf=0, minf=13 00:16:31.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:31.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:31.562 issued rwts: total=1613688,1612339,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.562 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:31.562 00:16:31.562 Run status group 0 (all jobs): 00:16:31.562 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=6303MiB (6610MB), run=60002-60002msec 00:16:31.562 WRITE: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=6298MiB (6604MB), run=60002-60002msec 00:16:31.562 00:16:31.562 Disk stats (read/write): 00:16:31.562 ublkb1: ios=1610487/1609123, merge=0/0, ticks=3647072/3642131, in_queue=7289204, util=99.90% 00:16:31.562 11:11:53 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.562 [2024-11-27 11:11:53.427706] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:31.562 [2024-11-27 11:11:53.457021] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:16:31.562 [2024-11-27 11:11:53.457254] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:31.562 [2024-11-27 11:11:53.465939] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:31.562 [2024-11-27 11:11:53.469988] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:31.562 [2024-11-27 11:11:53.469999] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.562 11:11:53 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.562 [2024-11-27 11:11:53.474017] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:31.562 [2024-11-27 11:11:53.479592] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:31.562 [2024-11-27 11:11:53.479690] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.562 11:11:53 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:31.562 11:11:53 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:31.562 11:11:53 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 83394 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 83394 ']' 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 83394 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83394 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83394' 00:16:31.562 killing process with pid 83394 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@969 -- # kill 83394 00:16:31.562 11:11:53 ublk_recovery -- common/autotest_common.sh@974 -- # wait 83394 00:16:31.562 [2024-11-27 11:11:53.689035] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:31.562 [2024-11-27 11:11:53.689089] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:31.562 00:16:31.562 real 1m2.971s 00:16:31.562 user 1m41.793s 00:16:31.562 sys 0m34.291s 00:16:31.562 11:11:54 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:31.562 ************************************ 00:16:31.562 END TEST ublk_recovery 00:16:31.562 ************************************ 00:16:31.562 11:11:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.562 11:11:54 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:31.562 11:11:54 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:31.562 11:11:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:31.562 11:11:54 -- common/autotest_common.sh@10 -- # set +x 00:16:31.562 11:11:54 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:31.562 11:11:54 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:31.562 11:11:54 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:16:31.562 11:11:54 -- spdk/autotest.sh@307 -- # '[' 0 -eq 
1 ']' 00:16:31.562 11:11:54 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:31.562 11:11:54 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:31.562 11:11:54 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:31.562 11:11:54 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:16:31.562 11:11:54 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:31.562 11:11:54 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:16:31.562 11:11:54 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:31.562 11:11:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:31.562 11:11:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:31.562 11:11:54 -- common/autotest_common.sh@10 -- # set +x 00:16:31.562 ************************************ 00:16:31.562 START TEST ftl 00:16:31.562 ************************************ 00:16:31.562 11:11:54 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:31.562 * Looking for test storage... 00:16:31.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:31.562 11:11:54 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:31.562 11:11:54 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:16:31.562 11:11:54 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:31.562 11:11:54 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:31.562 11:11:54 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.562 11:11:54 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.562 11:11:54 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.562 11:11:54 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.562 11:11:54 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.562 11:11:54 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.562 11:11:54 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.562 11:11:54 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.562 11:11:54 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.562 11:11:54 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.562 11:11:54 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.562 11:11:54 ftl -- scripts/common.sh@344 -- # case "$op" in 00:16:31.562 11:11:54 ftl -- scripts/common.sh@345 -- # : 1 00:16:31.562 11:11:54 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.562 11:11:54 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:31.562 11:11:54 ftl -- scripts/common.sh@365 -- # decimal 1 00:16:31.562 11:11:54 ftl -- scripts/common.sh@353 -- # local d=1 00:16:31.562 11:11:54 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.562 11:11:54 ftl -- scripts/common.sh@355 -- # echo 1 00:16:31.562 11:11:54 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.562 11:11:54 ftl -- scripts/common.sh@366 -- # decimal 2 00:16:31.562 11:11:54 ftl -- scripts/common.sh@353 -- # local d=2 00:16:31.562 11:11:54 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.562 11:11:54 ftl -- scripts/common.sh@355 -- # echo 2 00:16:31.562 11:11:54 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.562 11:11:54 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.562 11:11:54 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.562 11:11:54 ftl -- scripts/common.sh@368 -- # return 0 00:16:31.562 11:11:54 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.562 11:11:54 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.562 --rc genhtml_branch_coverage=1 00:16:31.562 --rc genhtml_function_coverage=1 00:16:31.562 --rc genhtml_legend=1 00:16:31.562 --rc geninfo_all_blocks=1 00:16:31.562 --rc geninfo_unexecuted_blocks=1 00:16:31.562 00:16:31.562 ' 00:16:31.562 11:11:54 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.562 --rc genhtml_branch_coverage=1 00:16:31.562 --rc genhtml_function_coverage=1 00:16:31.562 --rc genhtml_legend=1 00:16:31.562 --rc geninfo_all_blocks=1 00:16:31.562 --rc geninfo_unexecuted_blocks=1 00:16:31.562 00:16:31.562 ' 00:16:31.562 11:11:54 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.562 --rc genhtml_branch_coverage=1 00:16:31.562 --rc genhtml_function_coverage=1 00:16:31.562 --rc genhtml_legend=1 00:16:31.562 --rc geninfo_all_blocks=1 00:16:31.562 --rc geninfo_unexecuted_blocks=1 00:16:31.562 00:16:31.562 ' 00:16:31.562 11:11:54 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.562 --rc genhtml_branch_coverage=1 00:16:31.562 --rc genhtml_function_coverage=1 00:16:31.562 --rc genhtml_legend=1 00:16:31.562 --rc geninfo_all_blocks=1 00:16:31.562 --rc geninfo_unexecuted_blocks=1 00:16:31.562 00:16:31.562 ' 00:16:31.562 11:11:54 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:31.562 11:11:54 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:31.562 11:11:54 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:31.562 11:11:54 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:31.562 11:11:54 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
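The version gate logged just above only enables the extra --rc lcov flags when the installed lcov is strictly newer than 1.15 (this run compared 1.15 against 2). A rough stand-alone equivalent of that check, using sort -V instead of the cmp_versions helper from scripts/common.sh, purely for illustration:

    ver="$(lcov --version | awk '{print $NF}')"        # "2" in this run
    if [ "$(printf '1.15\n%s\n' "$ver" | sort -V | head -n1)" = "1.15" ] && [ "$ver" != "1.15" ]; then
        # lcov is newer than 1.15, so branch/function coverage options are safe
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi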
00:16:31.562 11:11:54 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:31.562 11:11:54 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.562 11:11:54 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:31.562 11:11:54 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:31.562 11:11:54 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:31.562 11:11:54 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:31.562 11:11:54 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:31.563 11:11:54 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:31.563 11:11:54 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:31.563 11:11:54 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:31.563 11:11:54 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:31.563 11:11:54 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:31.563 11:11:54 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:31.563 11:11:54 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:31.563 11:11:54 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:31.563 11:11:54 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:31.563 11:11:54 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:31.563 11:11:54 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:31.563 11:11:54 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:31.563 11:11:54 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:31.563 11:11:54 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:31.563 11:11:54 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:31.563 11:11:54 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:31.563 11:11:54 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:31.563 11:11:54 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.563 11:11:54 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:31.563 11:11:54 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:31.563 11:11:54 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:31.563 11:11:54 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:31.563 11:11:54 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:31.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:31.563 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:31.563 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:31.563 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:31.563 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:31.563 11:11:54 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=84187 00:16:31.563 11:11:54 ftl -- ftl/ftl.sh@38 -- # waitforlisten 84187 00:16:31.563 11:11:54 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:31.563 11:11:54 ftl -- common/autotest_common.sh@831 -- # '[' -z 84187 ']' 00:16:31.563 11:11:54 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.563 11:11:54 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.563 11:11:54 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.563 11:11:54 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.563 11:11:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:31.563 [2024-11-27 11:11:54.890050] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:31.563 [2024-11-27 11:11:54.890452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84187 ] 00:16:31.563 [2024-11-27 11:11:55.045120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.563 [2024-11-27 11:11:55.090868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.563 11:11:55 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.563 11:11:55 ftl -- common/autotest_common.sh@864 -- # return 0 00:16:31.563 11:11:55 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:31.563 11:11:55 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:31.563 11:11:56 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:31.563 11:11:56 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:31.563 11:11:56 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:31.563 11:11:56 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:31.563 11:11:56 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@50 -- # break 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@63 -- # break 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@66 -- # killprocess 84187 00:16:31.563 11:11:57 ftl -- common/autotest_common.sh@950 -- # '[' -z 84187 ']' 00:16:31.563 11:11:57 ftl -- common/autotest_common.sh@954 -- # kill -0 84187 00:16:31.563 11:11:57 ftl -- common/autotest_common.sh@955 -- # uname 00:16:31.563 11:11:57 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:31.563 11:11:57 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84187 00:16:31.563 killing process with pid 84187 00:16:31.563 11:11:57 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:31.563 11:11:57 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:31.563 11:11:57 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84187' 00:16:31.563 11:11:57 ftl -- common/autotest_common.sh@969 -- # kill 84187 00:16:31.563 11:11:57 ftl -- common/autotest_common.sh@974 -- # wait 84187 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:31.563 11:11:57 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:31.563 11:11:57 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:31.563 11:11:57 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:31.563 11:11:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:31.563 ************************************ 00:16:31.563 START TEST ftl_fio_basic 00:16:31.563 ************************************ 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:31.563 * Looking for test storage... 00:16:31.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:31.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.563 --rc genhtml_branch_coverage=1 00:16:31.563 --rc genhtml_function_coverage=1 00:16:31.563 --rc genhtml_legend=1 00:16:31.563 --rc geninfo_all_blocks=1 00:16:31.563 --rc geninfo_unexecuted_blocks=1 00:16:31.563 00:16:31.563 ' 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:31.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.563 --rc genhtml_branch_coverage=1 00:16:31.563 --rc genhtml_function_coverage=1 00:16:31.563 --rc genhtml_legend=1 00:16:31.563 --rc geninfo_all_blocks=1 00:16:31.563 --rc geninfo_unexecuted_blocks=1 00:16:31.563 00:16:31.563 ' 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:31.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.563 --rc genhtml_branch_coverage=1 00:16:31.563 --rc genhtml_function_coverage=1 00:16:31.563 --rc genhtml_legend=1 00:16:31.563 --rc geninfo_all_blocks=1 00:16:31.563 --rc geninfo_unexecuted_blocks=1 00:16:31.563 00:16:31.563 ' 00:16:31.563 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:31.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.564 --rc genhtml_branch_coverage=1 00:16:31.564 --rc genhtml_function_coverage=1 00:16:31.564 --rc genhtml_legend=1 00:16:31.564 --rc geninfo_all_blocks=1 00:16:31.564 --rc geninfo_unexecuted_blocks=1 00:16:31.564 00:16:31.564 ' 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
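For reference, the cache/base device split that handed 0000:00:11.0 and 0000:00:10.0 to fio.sh above comes down to two bdev_get_bdevs passes filtered with jq (filters copied verbatim from the trace; rpc.py stands for scripts/rpc.py):

    # NV-cache candidates: 64-byte metadata, not zoned, at least 1310720 blocks
    cache_disks=$(rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
    # base candidates: any other large enough NVMe bdev that is not the chosen cache device
    base_disks=$(rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')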
00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=84308 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 84308 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 84308 ']' 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.564 11:11:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:31.564 [2024-11-27 11:11:57.894192] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
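waitforlisten above blocks until the freshly started spdk_tgt (pid 84308) answers on /var/tmp/spdk.sock. A rough stand-in for what that amounts to; this is not the actual helper from autotest_common.sh, and the probe via rpc_get_methods and the retry count are illustrative only:

    waitforlisten() {                      # usage: waitforlisten <pid>
        local pid=$1 max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1        # target died
            rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }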
00:16:31.564 [2024-11-27 11:11:57.894594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84308 ] 00:16:31.564 [2024-11-27 11:11:58.041383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:31.564 [2024-11-27 11:11:58.092474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.564 [2024-11-27 11:11:58.092788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.564 [2024-11-27 11:11:58.092856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.564 11:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.564 11:11:58 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:16:31.564 11:11:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:31.564 11:11:58 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:31.564 11:11:58 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:31.564 11:11:58 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:31.564 11:11:58 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:31.564 11:11:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:31.564 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:31.564 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:31.564 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:31.564 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:31.564 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:31.564 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:31.564 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:31.564 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:31.564 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:31.564 { 00:16:31.564 "name": "nvme0n1", 00:16:31.564 "aliases": [ 00:16:31.564 "995c9f15-bfd6-4e21-9f2a-fa2934e035b7" 00:16:31.564 ], 00:16:31.564 "product_name": "NVMe disk", 00:16:31.564 "block_size": 4096, 00:16:31.564 "num_blocks": 1310720, 00:16:31.564 "uuid": "995c9f15-bfd6-4e21-9f2a-fa2934e035b7", 00:16:31.564 "numa_id": -1, 00:16:31.564 "assigned_rate_limits": { 00:16:31.564 "rw_ios_per_sec": 0, 00:16:31.564 "rw_mbytes_per_sec": 0, 00:16:31.564 "r_mbytes_per_sec": 0, 00:16:31.564 "w_mbytes_per_sec": 0 00:16:31.564 }, 00:16:31.564 "claimed": false, 00:16:31.564 "zoned": false, 00:16:31.564 "supported_io_types": { 00:16:31.564 "read": true, 00:16:31.564 "write": true, 00:16:31.564 "unmap": true, 00:16:31.564 "flush": true, 00:16:31.564 "reset": true, 00:16:31.564 "nvme_admin": true, 00:16:31.564 "nvme_io": true, 00:16:31.564 "nvme_io_md": false, 00:16:31.564 "write_zeroes": true, 00:16:31.564 "zcopy": false, 00:16:31.564 "get_zone_info": false, 00:16:31.564 "zone_management": false, 00:16:31.564 "zone_append": false, 00:16:31.564 "compare": true, 00:16:31.564 "compare_and_write": false, 00:16:31.564 "abort": true, 00:16:31.564 
"seek_hole": false, 00:16:31.564 "seek_data": false, 00:16:31.564 "copy": true, 00:16:31.564 "nvme_iov_md": false 00:16:31.564 }, 00:16:31.564 "driver_specific": { 00:16:31.564 "nvme": [ 00:16:31.564 { 00:16:31.564 "pci_address": "0000:00:11.0", 00:16:31.564 "trid": { 00:16:31.564 "trtype": "PCIe", 00:16:31.564 "traddr": "0000:00:11.0" 00:16:31.564 }, 00:16:31.564 "ctrlr_data": { 00:16:31.564 "cntlid": 0, 00:16:31.564 "vendor_id": "0x1b36", 00:16:31.564 "model_number": "QEMU NVMe Ctrl", 00:16:31.564 "serial_number": "12341", 00:16:31.564 "firmware_revision": "8.0.0", 00:16:31.564 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:31.564 "oacs": { 00:16:31.564 "security": 0, 00:16:31.564 "format": 1, 00:16:31.564 "firmware": 0, 00:16:31.564 "ns_manage": 1 00:16:31.564 }, 00:16:31.564 "multi_ctrlr": false, 00:16:31.564 "ana_reporting": false 00:16:31.564 }, 00:16:31.564 "vs": { 00:16:31.564 "nvme_version": "1.4" 00:16:31.564 }, 00:16:31.564 "ns_data": { 00:16:31.564 "id": 1, 00:16:31.564 "can_share": false 00:16:31.564 } 00:16:31.564 } 00:16:31.564 ], 00:16:31.564 "mp_policy": "active_passive" 00:16:31.564 } 00:16:31.564 } 00:16:31.564 ]' 00:16:31.564 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=95719df7-e8f8-4428-8680-063cd4f05e68 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 95719df7-e8f8-4428-8680-063cd4f05e68 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=e5accb64-9824-4362-a531-b39683f4c685 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e5accb64-9824-4362-a531-b39683f4c685 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=e5accb64-9824-4362-a531-b39683f4c685 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size e5accb64-9824-4362-a531-b39683f4c685 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=e5accb64-9824-4362-a531-b39683f4c685 
00:16:31.565 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:31.565 11:11:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5accb64-9824-4362-a531-b39683f4c685 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:31.565 { 00:16:31.565 "name": "e5accb64-9824-4362-a531-b39683f4c685", 00:16:31.565 "aliases": [ 00:16:31.565 "lvs/nvme0n1p0" 00:16:31.565 ], 00:16:31.565 "product_name": "Logical Volume", 00:16:31.565 "block_size": 4096, 00:16:31.565 "num_blocks": 26476544, 00:16:31.565 "uuid": "e5accb64-9824-4362-a531-b39683f4c685", 00:16:31.565 "assigned_rate_limits": { 00:16:31.565 "rw_ios_per_sec": 0, 00:16:31.565 "rw_mbytes_per_sec": 0, 00:16:31.565 "r_mbytes_per_sec": 0, 00:16:31.565 "w_mbytes_per_sec": 0 00:16:31.565 }, 00:16:31.565 "claimed": false, 00:16:31.565 "zoned": false, 00:16:31.565 "supported_io_types": { 00:16:31.565 "read": true, 00:16:31.565 "write": true, 00:16:31.565 "unmap": true, 00:16:31.565 "flush": false, 00:16:31.565 "reset": true, 00:16:31.565 "nvme_admin": false, 00:16:31.565 "nvme_io": false, 00:16:31.565 "nvme_io_md": false, 00:16:31.565 "write_zeroes": true, 00:16:31.565 "zcopy": false, 00:16:31.565 "get_zone_info": false, 00:16:31.565 "zone_management": false, 00:16:31.565 "zone_append": false, 00:16:31.565 "compare": false, 00:16:31.565 "compare_and_write": false, 00:16:31.565 "abort": false, 00:16:31.565 "seek_hole": true, 00:16:31.565 "seek_data": true, 00:16:31.565 "copy": false, 00:16:31.565 "nvme_iov_md": false 00:16:31.565 }, 00:16:31.565 "driver_specific": { 00:16:31.565 "lvol": { 00:16:31.565 "lvol_store_uuid": "95719df7-e8f8-4428-8680-063cd4f05e68", 00:16:31.565 "base_bdev": "nvme0n1", 00:16:31.565 "thin_provision": true, 00:16:31.565 "num_allocated_clusters": 0, 00:16:31.565 "snapshot": false, 00:16:31.565 "clone": false, 00:16:31.565 "esnap_clone": false 00:16:31.565 } 00:16:31.565 } 00:16:31.565 } 00:16:31.565 ]' 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size e5accb64-9824-4362-a531-b39683f4c685 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=e5accb64-9824-4362-a531-b39683f4c685 00:16:31.565 11:12:00 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:31.565 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5accb64-9824-4362-a531-b39683f4c685 00:16:31.824 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:31.824 { 00:16:31.824 "name": "e5accb64-9824-4362-a531-b39683f4c685", 00:16:31.824 "aliases": [ 00:16:31.824 "lvs/nvme0n1p0" 00:16:31.824 ], 00:16:31.824 "product_name": "Logical Volume", 00:16:31.824 "block_size": 4096, 00:16:31.824 "num_blocks": 26476544, 00:16:31.824 "uuid": "e5accb64-9824-4362-a531-b39683f4c685", 00:16:31.824 "assigned_rate_limits": { 00:16:31.824 "rw_ios_per_sec": 0, 00:16:31.824 "rw_mbytes_per_sec": 0, 00:16:31.824 "r_mbytes_per_sec": 0, 00:16:31.824 "w_mbytes_per_sec": 0 00:16:31.824 }, 00:16:31.824 "claimed": false, 00:16:31.824 "zoned": false, 00:16:31.824 "supported_io_types": { 00:16:31.824 "read": true, 00:16:31.824 "write": true, 00:16:31.824 "unmap": true, 00:16:31.824 "flush": false, 00:16:31.824 "reset": true, 00:16:31.824 "nvme_admin": false, 00:16:31.824 "nvme_io": false, 00:16:31.824 "nvme_io_md": false, 00:16:31.824 "write_zeroes": true, 00:16:31.824 "zcopy": false, 00:16:31.824 "get_zone_info": false, 00:16:31.824 "zone_management": false, 00:16:31.824 "zone_append": false, 00:16:31.824 "compare": false, 00:16:31.824 "compare_and_write": false, 00:16:31.824 "abort": false, 00:16:31.824 "seek_hole": true, 00:16:31.824 "seek_data": true, 00:16:31.824 "copy": false, 00:16:31.824 "nvme_iov_md": false 00:16:31.824 }, 00:16:31.824 "driver_specific": { 00:16:31.824 "lvol": { 00:16:31.824 "lvol_store_uuid": "95719df7-e8f8-4428-8680-063cd4f05e68", 00:16:31.824 "base_bdev": "nvme0n1", 00:16:31.824 "thin_provision": true, 00:16:31.824 "num_allocated_clusters": 0, 00:16:31.824 "snapshot": false, 00:16:31.824 "clone": false, 00:16:31.824 "esnap_clone": false 00:16:31.824 } 00:16:31.824 } 00:16:31.824 } 00:16:31.824 ]' 00:16:31.824 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:31.824 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:31.824 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:31.824 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:31.824 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:31.824 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:31.824 11:12:00 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:31.824 11:12:00 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:32.083 11:12:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:32.083 11:12:00 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:32.083 11:12:00 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:32.083 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:32.083 11:12:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size e5accb64-9824-4362-a531-b39683f4c685 00:16:32.083 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=e5accb64-9824-4362-a531-b39683f4c685 00:16:32.083 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:32.083 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:32.083 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:32.083 11:12:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5accb64-9824-4362-a531-b39683f4c685 00:16:32.342 11:12:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:32.342 { 00:16:32.342 "name": "e5accb64-9824-4362-a531-b39683f4c685", 00:16:32.342 "aliases": [ 00:16:32.342 "lvs/nvme0n1p0" 00:16:32.342 ], 00:16:32.342 "product_name": "Logical Volume", 00:16:32.342 "block_size": 4096, 00:16:32.342 "num_blocks": 26476544, 00:16:32.342 "uuid": "e5accb64-9824-4362-a531-b39683f4c685", 00:16:32.342 "assigned_rate_limits": { 00:16:32.342 "rw_ios_per_sec": 0, 00:16:32.342 "rw_mbytes_per_sec": 0, 00:16:32.342 "r_mbytes_per_sec": 0, 00:16:32.342 "w_mbytes_per_sec": 0 00:16:32.342 }, 00:16:32.342 "claimed": false, 00:16:32.342 "zoned": false, 00:16:32.342 "supported_io_types": { 00:16:32.342 "read": true, 00:16:32.342 "write": true, 00:16:32.342 "unmap": true, 00:16:32.342 "flush": false, 00:16:32.342 "reset": true, 00:16:32.342 "nvme_admin": false, 00:16:32.342 "nvme_io": false, 00:16:32.342 "nvme_io_md": false, 00:16:32.342 "write_zeroes": true, 00:16:32.342 "zcopy": false, 00:16:32.342 "get_zone_info": false, 00:16:32.342 "zone_management": false, 00:16:32.342 "zone_append": false, 00:16:32.342 "compare": false, 00:16:32.342 "compare_and_write": false, 00:16:32.342 "abort": false, 00:16:32.342 "seek_hole": true, 00:16:32.342 "seek_data": true, 00:16:32.342 "copy": false, 00:16:32.342 "nvme_iov_md": false 00:16:32.342 }, 00:16:32.342 "driver_specific": { 00:16:32.342 "lvol": { 00:16:32.342 "lvol_store_uuid": "95719df7-e8f8-4428-8680-063cd4f05e68", 00:16:32.342 "base_bdev": "nvme0n1", 00:16:32.342 "thin_provision": true, 00:16:32.342 "num_allocated_clusters": 0, 00:16:32.342 "snapshot": false, 00:16:32.342 "clone": false, 00:16:32.342 "esnap_clone": false 00:16:32.342 } 00:16:32.342 } 00:16:32.342 } 00:16:32.342 ]' 00:16:32.342 11:12:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:32.342 11:12:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:32.342 11:12:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:32.342 11:12:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:32.342 11:12:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:32.342 11:12:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:32.342 11:12:01 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:32.342 11:12:01 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:32.342 11:12:01 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e5accb64-9824-4362-a531-b39683f4c685 -c nvc0n1p0 --l2p_dram_limit 60 00:16:32.602 [2024-11-27 11:12:01.370263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.602 [2024-11-27 11:12:01.370311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:32.602 [2024-11-27 11:12:01.370332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:32.602 
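The '[: -eq: unary operator expected' error reported earlier for fio.sh line 52 is a plain shell quoting issue: the line evaluated '[' -eq 1 ']', meaning whatever variable it dereferences expanded to an empty string, so test saw no left-hand operand. The run continues regardless, but a defensive form of such a numeric check looks like this (the variable name is hypothetical, only to show the default and quoting):

    # guard an integer comparison against an unset or empty variable
    if [ "${some_flag:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi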
[2024-11-27 11:12:01.370349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.602 [2024-11-27 11:12:01.370408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.602 [2024-11-27 11:12:01.370417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:32.602 [2024-11-27 11:12:01.370426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:32.602 [2024-11-27 11:12:01.370436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.602 [2024-11-27 11:12:01.370480] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:32.602 [2024-11-27 11:12:01.370713] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:32.602 [2024-11-27 11:12:01.370727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.602 [2024-11-27 11:12:01.370734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:32.602 [2024-11-27 11:12:01.370750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:16:32.602 [2024-11-27 11:12:01.370757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.602 [2024-11-27 11:12:01.370791] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2b2ffe3c-f04d-449c-b6f6-9f0c7ea4076c 00:16:32.602 [2024-11-27 11:12:01.372079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.602 [2024-11-27 11:12:01.372223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:32.602 [2024-11-27 11:12:01.372243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:32.602 [2024-11-27 11:12:01.372259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.602 [2024-11-27 11:12:01.379116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.602 [2024-11-27 11:12:01.379228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:32.602 [2024-11-27 11:12:01.379243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.770 ms 00:16:32.602 [2024-11-27 11:12:01.379250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.602 [2024-11-27 11:12:01.379346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.602 [2024-11-27 11:12:01.379370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:32.602 [2024-11-27 11:12:01.379379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:32.602 [2024-11-27 11:12:01.379385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.602 [2024-11-27 11:12:01.379434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.602 [2024-11-27 11:12:01.379442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:32.602 [2024-11-27 11:12:01.379450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:32.602 [2024-11-27 11:12:01.379456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.602 [2024-11-27 11:12:01.379486] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:32.602 [2024-11-27 11:12:01.381165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.602 [2024-11-27 
11:12:01.381192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:32.602 [2024-11-27 11:12:01.381200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:16:32.602 [2024-11-27 11:12:01.381208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.602 [2024-11-27 11:12:01.381242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.602 [2024-11-27 11:12:01.381251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:32.602 [2024-11-27 11:12:01.381258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:32.602 [2024-11-27 11:12:01.381268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.602 [2024-11-27 11:12:01.381294] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:32.602 [2024-11-27 11:12:01.381475] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:32.602 [2024-11-27 11:12:01.381496] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:32.602 [2024-11-27 11:12:01.381507] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:32.602 [2024-11-27 11:12:01.381515] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:32.602 [2024-11-27 11:12:01.381524] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:32.602 [2024-11-27 11:12:01.381539] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:32.602 [2024-11-27 11:12:01.381551] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:32.602 [2024-11-27 11:12:01.381556] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:32.602 [2024-11-27 11:12:01.381564] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:32.602 [2024-11-27 11:12:01.381571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.602 [2024-11-27 11:12:01.381578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:32.602 [2024-11-27 11:12:01.381584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:16:32.602 [2024-11-27 11:12:01.381592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.602 [2024-11-27 11:12:01.381661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.602 [2024-11-27 11:12:01.381670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:32.602 [2024-11-27 11:12:01.381676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:32.602 [2024-11-27 11:12:01.381683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.602 [2024-11-27 11:12:01.381773] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:32.602 [2024-11-27 11:12:01.381782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:32.602 [2024-11-27 11:12:01.381789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:32.602 [2024-11-27 11:12:01.381796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.602 [2024-11-27 11:12:01.381802] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:16:32.602 [2024-11-27 11:12:01.381808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:32.602 [2024-11-27 11:12:01.381814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:32.602 [2024-11-27 11:12:01.381821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:32.602 [2024-11-27 11:12:01.381826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:32.602 [2024-11-27 11:12:01.381832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:32.603 [2024-11-27 11:12:01.381837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:32.603 [2024-11-27 11:12:01.381844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:32.603 [2024-11-27 11:12:01.381849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:32.603 [2024-11-27 11:12:01.381859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:32.603 [2024-11-27 11:12:01.381864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:32.603 [2024-11-27 11:12:01.381870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.603 [2024-11-27 11:12:01.381875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:32.603 [2024-11-27 11:12:01.381882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:32.603 [2024-11-27 11:12:01.381886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.603 [2024-11-27 11:12:01.381911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:32.603 [2024-11-27 11:12:01.381928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:32.603 [2024-11-27 11:12:01.381935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:32.603 [2024-11-27 11:12:01.381940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:32.603 [2024-11-27 11:12:01.381947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:32.603 [2024-11-27 11:12:01.381953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:32.603 [2024-11-27 11:12:01.381964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:32.603 [2024-11-27 11:12:01.381972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:32.603 [2024-11-27 11:12:01.381979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:32.603 [2024-11-27 11:12:01.381984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:32.603 [2024-11-27 11:12:01.381993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:32.603 [2024-11-27 11:12:01.381998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:32.603 [2024-11-27 11:12:01.382004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:32.603 [2024-11-27 11:12:01.382009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:32.603 [2024-11-27 11:12:01.382016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:32.603 [2024-11-27 11:12:01.382022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:32.603 [2024-11-27 11:12:01.382028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:32.603 [2024-11-27 11:12:01.382034] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:32.603 [2024-11-27 11:12:01.382040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:32.603 [2024-11-27 11:12:01.382045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:32.603 [2024-11-27 11:12:01.382053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.603 [2024-11-27 11:12:01.382058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:32.603 [2024-11-27 11:12:01.382064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:32.603 [2024-11-27 11:12:01.382069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.603 [2024-11-27 11:12:01.382075] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:32.603 [2024-11-27 11:12:01.382081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:32.603 [2024-11-27 11:12:01.382091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:32.603 [2024-11-27 11:12:01.382096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:32.603 [2024-11-27 11:12:01.382104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:32.603 [2024-11-27 11:12:01.382109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:32.603 [2024-11-27 11:12:01.382115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:32.603 [2024-11-27 11:12:01.382120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:32.603 [2024-11-27 11:12:01.382126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:32.603 [2024-11-27 11:12:01.382131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:32.603 [2024-11-27 11:12:01.382141] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:32.603 [2024-11-27 11:12:01.382151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:32.603 [2024-11-27 11:12:01.382207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:32.603 [2024-11-27 11:12:01.382213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:32.603 [2024-11-27 11:12:01.382221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:32.603 [2024-11-27 11:12:01.382227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:32.603 [2024-11-27 11:12:01.382234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:32.603 [2024-11-27 11:12:01.382240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:32.603 [2024-11-27 11:12:01.382249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:32.603 [2024-11-27 11:12:01.382255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:16:32.603 [2024-11-27 11:12:01.382262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:32.603 [2024-11-27 11:12:01.382268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:32.603 [2024-11-27 11:12:01.382274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:32.603 [2024-11-27 11:12:01.382280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:32.603 [2024-11-27 11:12:01.382286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:32.603 [2024-11-27 11:12:01.382292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:32.603 [2024-11-27 11:12:01.382299] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:32.603 [2024-11-27 11:12:01.382305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:32.603 [2024-11-27 11:12:01.382313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:32.603 [2024-11-27 11:12:01.382319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:32.603 [2024-11-27 11:12:01.382326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:32.603 [2024-11-27 11:12:01.382332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:32.603 [2024-11-27 11:12:01.382339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.603 [2024-11-27 11:12:01.382345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:32.603 [2024-11-27 11:12:01.382354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:16:32.603 [2024-11-27 11:12:01.382359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.603 [2024-11-27 11:12:01.382431] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:16:32.603 [2024-11-27 11:12:01.382439] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:35.132 [2024-11-27 11:12:03.655708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.132 [2024-11-27 11:12:03.655770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:35.132 [2024-11-27 11:12:03.655788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2273.264 ms 00:16:35.132 [2024-11-27 11:12:03.655796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.132 [2024-11-27 11:12:03.675528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.132 [2024-11-27 11:12:03.675580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:35.132 [2024-11-27 11:12:03.675598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.610 ms 00:16:35.132 [2024-11-27 11:12:03.675607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.132 [2024-11-27 11:12:03.675761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.132 [2024-11-27 11:12:03.675773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:35.132 [2024-11-27 11:12:03.675784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:35.132 [2024-11-27 11:12:03.675792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.132 [2024-11-27 11:12:03.688005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.132 [2024-11-27 11:12:03.688055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:35.132 [2024-11-27 11:12:03.688087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.155 ms 00:16:35.132 [2024-11-27 11:12:03.688098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.132 [2024-11-27 11:12:03.688151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.132 [2024-11-27 11:12:03.688163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:35.132 [2024-11-27 11:12:03.688191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:35.132 [2024-11-27 11:12:03.688202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.132 [2024-11-27 11:12:03.688684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.132 [2024-11-27 11:12:03.688706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:35.132 [2024-11-27 11:12:03.688722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:16:35.132 [2024-11-27 11:12:03.688735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.132 [2024-11-27 11:12:03.688980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.132 [2024-11-27 11:12:03.688997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:35.132 [2024-11-27 11:12:03.689013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:16:35.132 [2024-11-27 11:12:03.689026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.132 [2024-11-27 11:12:03.695967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.132 [2024-11-27 11:12:03.696183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:35.132 [2024-11-27 
11:12:03.696203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.884 ms 00:16:35.132 [2024-11-27 11:12:03.696211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.132 [2024-11-27 11:12:03.705645] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:35.132 [2024-11-27 11:12:03.722923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.132 [2024-11-27 11:12:03.723080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:35.132 [2024-11-27 11:12:03.723097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.618 ms 00:16:35.133 [2024-11-27 11:12:03.723106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.133 [2024-11-27 11:12:03.758531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.133 [2024-11-27 11:12:03.758658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:35.133 [2024-11-27 11:12:03.758676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.386 ms 00:16:35.133 [2024-11-27 11:12:03.758689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.133 [2024-11-27 11:12:03.758920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.133 [2024-11-27 11:12:03.758935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:35.133 [2024-11-27 11:12:03.758947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:16:35.133 [2024-11-27 11:12:03.758957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.133 [2024-11-27 11:12:03.762093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.133 [2024-11-27 11:12:03.762130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:35.133 [2024-11-27 11:12:03.762140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.111 ms 00:16:35.133 [2024-11-27 11:12:03.762153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.133 [2024-11-27 11:12:03.764867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.133 [2024-11-27 11:12:03.764923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:35.133 [2024-11-27 11:12:03.764935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:16:35.133 [2024-11-27 11:12:03.764944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.133 [2024-11-27 11:12:03.765250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.133 [2024-11-27 11:12:03.765262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:35.133 [2024-11-27 11:12:03.765271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:16:35.133 [2024-11-27 11:12:03.765282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.133 [2024-11-27 11:12:03.787778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.133 [2024-11-27 11:12:03.787816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:35.133 [2024-11-27 11:12:03.787829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.467 ms 00:16:35.133 [2024-11-27 11:12:03.787839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.133 [2024-11-27 11:12:03.791926] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.133 [2024-11-27 11:12:03.791962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:35.133 [2024-11-27 11:12:03.791992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.992 ms 00:16:35.133 [2024-11-27 11:12:03.792003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.133 [2024-11-27 11:12:03.795250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.133 [2024-11-27 11:12:03.795375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:35.133 [2024-11-27 11:12:03.795390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms 00:16:35.133 [2024-11-27 11:12:03.795399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.133 [2024-11-27 11:12:03.798987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.133 [2024-11-27 11:12:03.799025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:35.133 [2024-11-27 11:12:03.799035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.545 ms 00:16:35.133 [2024-11-27 11:12:03.799048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.133 [2024-11-27 11:12:03.799092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.133 [2024-11-27 11:12:03.799103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:35.133 [2024-11-27 11:12:03.799113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:35.133 [2024-11-27 11:12:03.799123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.133 [2024-11-27 11:12:03.799200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.133 [2024-11-27 11:12:03.799211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:35.133 [2024-11-27 11:12:03.799220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:35.133 [2024-11-27 11:12:03.799233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.133 [2024-11-27 11:12:03.800258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2429.527 ms, result 0 00:16:35.133 { 00:16:35.133 "name": "ftl0", 00:16:35.133 "uuid": "2b2ffe3c-f04d-449c-b6f6-9f0c7ea4076c" 00:16:35.133 } 00:16:35.133 11:12:03 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:35.133 11:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:35.133 11:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:35.133 11:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:16:35.133 11:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:35.133 11:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:35.133 11:12:03 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:35.391 11:12:04 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:35.391 [ 00:16:35.391 { 00:16:35.391 "name": "ftl0", 00:16:35.391 "aliases": [ 00:16:35.391 "2b2ffe3c-f04d-449c-b6f6-9f0c7ea4076c" 00:16:35.391 ], 00:16:35.391 "product_name": "FTL disk", 00:16:35.391 
"block_size": 4096, 00:16:35.391 "num_blocks": 20971520, 00:16:35.391 "uuid": "2b2ffe3c-f04d-449c-b6f6-9f0c7ea4076c", 00:16:35.391 "assigned_rate_limits": { 00:16:35.391 "rw_ios_per_sec": 0, 00:16:35.391 "rw_mbytes_per_sec": 0, 00:16:35.391 "r_mbytes_per_sec": 0, 00:16:35.391 "w_mbytes_per_sec": 0 00:16:35.391 }, 00:16:35.391 "claimed": false, 00:16:35.391 "zoned": false, 00:16:35.391 "supported_io_types": { 00:16:35.391 "read": true, 00:16:35.391 "write": true, 00:16:35.391 "unmap": true, 00:16:35.391 "flush": true, 00:16:35.391 "reset": false, 00:16:35.391 "nvme_admin": false, 00:16:35.391 "nvme_io": false, 00:16:35.391 "nvme_io_md": false, 00:16:35.391 "write_zeroes": true, 00:16:35.391 "zcopy": false, 00:16:35.391 "get_zone_info": false, 00:16:35.391 "zone_management": false, 00:16:35.391 "zone_append": false, 00:16:35.391 "compare": false, 00:16:35.391 "compare_and_write": false, 00:16:35.391 "abort": false, 00:16:35.391 "seek_hole": false, 00:16:35.391 "seek_data": false, 00:16:35.391 "copy": false, 00:16:35.391 "nvme_iov_md": false 00:16:35.391 }, 00:16:35.391 "driver_specific": { 00:16:35.391 "ftl": { 00:16:35.391 "base_bdev": "e5accb64-9824-4362-a531-b39683f4c685", 00:16:35.391 "cache": "nvc0n1p0" 00:16:35.391 } 00:16:35.391 } 00:16:35.391 } 00:16:35.391 ] 00:16:35.391 11:12:04 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:16:35.391 11:12:04 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:35.391 11:12:04 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:35.650 11:12:04 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:35.650 11:12:04 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:35.912 [2024-11-27 11:12:04.614746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.912 [2024-11-27 11:12:04.614905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:35.912 [2024-11-27 11:12:04.614929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:35.912 [2024-11-27 11:12:04.614939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.912 [2024-11-27 11:12:04.614976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:35.912 [2024-11-27 11:12:04.615546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.912 [2024-11-27 11:12:04.615574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:35.912 [2024-11-27 11:12:04.615584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:16:35.912 [2024-11-27 11:12:04.615593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.912 [2024-11-27 11:12:04.616100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.912 [2024-11-27 11:12:04.616137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:35.912 [2024-11-27 11:12:04.616145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:16:35.912 [2024-11-27 11:12:04.616155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.912 [2024-11-27 11:12:04.619394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.912 [2024-11-27 11:12:04.619429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:35.912 [2024-11-27 
11:12:04.619438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:16:35.912 [2024-11-27 11:12:04.619447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.912 [2024-11-27 11:12:04.625688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.912 [2024-11-27 11:12:04.625813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:35.912 [2024-11-27 11:12:04.625828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.212 ms 00:16:35.912 [2024-11-27 11:12:04.625838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.912 [2024-11-27 11:12:04.627867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.912 [2024-11-27 11:12:04.627917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:35.912 [2024-11-27 11:12:04.627927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.925 ms 00:16:35.912 [2024-11-27 11:12:04.627937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.912 [2024-11-27 11:12:04.632483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.912 [2024-11-27 11:12:04.632604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:35.912 [2024-11-27 11:12:04.632619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.504 ms 00:16:35.912 [2024-11-27 11:12:04.632629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.912 [2024-11-27 11:12:04.632772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.912 [2024-11-27 11:12:04.632786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:35.912 [2024-11-27 11:12:04.632795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:16:35.912 [2024-11-27 11:12:04.632804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.912 [2024-11-27 11:12:04.634472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.912 [2024-11-27 11:12:04.634506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:35.912 [2024-11-27 11:12:04.634516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.644 ms 00:16:35.912 [2024-11-27 11:12:04.634525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.912 [2024-11-27 11:12:04.636152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.912 [2024-11-27 11:12:04.636188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:35.912 [2024-11-27 11:12:04.636197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:16:35.912 [2024-11-27 11:12:04.636205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.912 [2024-11-27 11:12:04.637503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.912 [2024-11-27 11:12:04.637540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:35.912 [2024-11-27 11:12:04.637549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:16:35.912 [2024-11-27 11:12:04.637557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.912 [2024-11-27 11:12:04.639090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.912 [2024-11-27 11:12:04.639195] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:35.912 [2024-11-27 11:12:04.639208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:16:35.912 [2024-11-27 11:12:04.639217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.912 [2024-11-27 11:12:04.639251] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:35.912 [2024-11-27 11:12:04.639267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 
11:12:04.639458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:35.912 [2024-11-27 11:12:04.639474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:16:35.913 [2024-11-27 11:12:04.639670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.639994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.640008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:35.913 [2024-11-27 11:12:04.640016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:35.914 [2024-11-27 11:12:04.640169] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:35.914 [2024-11-27 11:12:04.640177] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2b2ffe3c-f04d-449c-b6f6-9f0c7ea4076c 00:16:35.914 [2024-11-27 11:12:04.640186] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:35.914 [2024-11-27 11:12:04.640193] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:35.914 [2024-11-27 11:12:04.640204] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:35.914 [2024-11-27 11:12:04.640211] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:35.914 [2024-11-27 11:12:04.640220] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:35.914 [2024-11-27 11:12:04.640227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:35.914 [2024-11-27 11:12:04.640238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:35.914 [2024-11-27 11:12:04.640244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:35.914 [2024-11-27 11:12:04.640252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:35.914 [2024-11-27 11:12:04.640260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.914 [2024-11-27 11:12:04.640270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:35.914 [2024-11-27 11:12:04.640278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:16:35.914 [2024-11-27 11:12:04.640347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.914 [2024-11-27 11:12:04.642210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.914 [2024-11-27 11:12:04.642232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:35.914 [2024-11-27 11:12:04.642240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.837 ms 00:16:35.914 [2024-11-27 11:12:04.642250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.914 [2024-11-27 11:12:04.642346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.914 [2024-11-27 11:12:04.642367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:35.914 [2024-11-27 11:12:04.642375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:35.914 [2024-11-27 11:12:04.642384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.914 [2024-11-27 11:12:04.648947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.914 [2024-11-27 11:12:04.649059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:35.914 [2024-11-27 11:12:04.649109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.914 [2024-11-27 11:12:04.649134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.914 
[2024-11-27 11:12:04.649465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.914 [2024-11-27 11:12:04.649558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:35.914 [2024-11-27 11:12:04.649623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.914 [2024-11-27 11:12:04.649650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.914 [2024-11-27 11:12:04.649783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.914 [2024-11-27 11:12:04.649853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:35.914 [2024-11-27 11:12:04.649926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.914 [2024-11-27 11:12:04.649954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.914 [2024-11-27 11:12:04.650016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.914 [2024-11-27 11:12:04.650065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:35.914 [2024-11-27 11:12:04.650123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.914 [2024-11-27 11:12:04.650150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.914 [2024-11-27 11:12:04.662203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.914 [2024-11-27 11:12:04.662339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:35.914 [2024-11-27 11:12:04.662389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.914 [2024-11-27 11:12:04.662414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.914 [2024-11-27 11:12:04.672115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.914 [2024-11-27 11:12:04.672242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:35.914 [2024-11-27 11:12:04.672292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.914 [2024-11-27 11:12:04.672318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.914 [2024-11-27 11:12:04.672433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.914 [2024-11-27 11:12:04.672467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:35.914 [2024-11-27 11:12:04.672548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.914 [2024-11-27 11:12:04.672572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.914 [2024-11-27 11:12:04.672655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.914 [2024-11-27 11:12:04.672682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:35.914 [2024-11-27 11:12:04.672703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.914 [2024-11-27 11:12:04.672749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.914 [2024-11-27 11:12:04.672860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.914 [2024-11-27 11:12:04.672968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:35.914 [2024-11-27 11:12:04.672992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.914 [2024-11-27 11:12:04.673052] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.914 [2024-11-27 11:12:04.673154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.914 [2024-11-27 11:12:04.673184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:35.914 [2024-11-27 11:12:04.673206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.914 [2024-11-27 11:12:04.673227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.914 [2024-11-27 11:12:04.673294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.915 [2024-11-27 11:12:04.673320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:35.915 [2024-11-27 11:12:04.673340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.915 [2024-11-27 11:12:04.673363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.915 [2024-11-27 11:12:04.673509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.915 [2024-11-27 11:12:04.673586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:35.915 [2024-11-27 11:12:04.673632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.915 [2024-11-27 11:12:04.673657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.915 [2024-11-27 11:12:04.673875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.101 ms, result 0 00:16:35.915 true 00:16:35.915 11:12:04 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 84308 00:16:35.915 11:12:04 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 84308 ']' 00:16:35.915 11:12:04 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 84308 00:16:35.915 11:12:04 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:16:35.915 11:12:04 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.915 11:12:04 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84308 00:16:35.915 killing process with pid 84308 00:16:35.915 11:12:04 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:35.915 11:12:04 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.915 11:12:04 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84308' 00:16:35.915 11:12:04 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 84308 00:16:35.915 11:12:04 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 84308 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:41.185 11:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:41.185 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:41.185 fio-3.35 00:16:41.185 Starting 1 thread 00:16:46.547 00:16:46.547 test: (groupid=0, jobs=1): err= 0: pid=84469: Wed Nov 27 11:12:14 2024 00:16:46.547 read: IOPS=840, BW=55.8MiB/s (58.5MB/s)(255MiB/4559msec) 00:16:46.547 slat (nsec): min=4126, max=41729, avg=6786.75, stdev=3347.09 00:16:46.547 clat (usec): min=258, max=1751, avg=537.61, stdev=227.27 00:16:46.547 lat (usec): min=263, max=1769, avg=544.40, stdev=228.70 00:16:46.547 clat percentiles (usec): 00:16:46.547 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 310], 00:16:46.547 | 30.00th=[ 355], 40.00th=[ 457], 50.00th=[ 486], 60.00th=[ 545], 00:16:46.547 | 70.00th=[ 594], 80.00th=[ 816], 90.00th=[ 906], 95.00th=[ 938], 00:16:46.547 | 99.00th=[ 1123], 99.50th=[ 1188], 99.90th=[ 1385], 99.95th=[ 1516], 00:16:46.547 | 99.99th=[ 1745] 00:16:46.547 write: IOPS=846, BW=56.2MiB/s (59.0MB/s)(256MiB/4554msec); 0 zone resets 00:16:46.547 slat (usec): min=14, max=104, avg=25.08, stdev= 7.26 00:16:46.547 clat (usec): min=272, max=1861, avg=602.48, stdev=252.86 00:16:46.547 lat (usec): min=296, max=1887, avg=627.56, stdev=254.66 00:16:46.547 clat percentiles (usec): 00:16:46.547 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 318], 20.00th=[ 334], 00:16:46.547 | 30.00th=[ 383], 40.00th=[ 510], 50.00th=[ 570], 60.00th=[ 619], 00:16:46.547 | 70.00th=[ 693], 80.00th=[ 873], 90.00th=[ 988], 95.00th=[ 1029], 00:16:46.547 | 99.00th=[ 1237], 99.50th=[ 1319], 99.90th=[ 1729], 99.95th=[ 1745], 00:16:46.547 | 99.99th=[ 1860] 00:16:46.547 bw ( KiB/s): min=36176, max=88536, per=100.00%, avg=57618.67, stdev=17758.81, samples=9 00:16:46.547 iops : min= 532, max= 1302, avg=847.33, stdev=261.16, samples=9 00:16:46.547 lat (usec) : 500=46.04%, 750=31.19%, 1000=17.09% 
00:16:46.547 lat (msec) : 2=5.68% 00:16:46.548 cpu : usr=98.99%, sys=0.15%, ctx=15, majf=0, minf=1181 00:16:46.548 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:46.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.548 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.548 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:46.548 00:16:46.548 Run status group 0 (all jobs): 00:16:46.548 READ: bw=55.8MiB/s (58.5MB/s), 55.8MiB/s-55.8MiB/s (58.5MB/s-58.5MB/s), io=255MiB (267MB), run=4559-4559msec 00:16:46.548 WRITE: bw=56.2MiB/s (59.0MB/s), 56.2MiB/s-56.2MiB/s (59.0MB/s-59.0MB/s), io=256MiB (269MB), run=4554-4554msec 00:16:46.548 ----------------------------------------------------- 00:16:46.548 Suppressions used: 00:16:46.548 count bytes template 00:16:46.548 1 5 /usr/src/fio/parse.c 00:16:46.548 1 8 libtcmalloc_minimal.so 00:16:46.548 1 904 libcrypto.so 00:16:46.548 ----------------------------------------------------- 00:16:46.548 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:46.548 11:12:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:46.808 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:46.808 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:46.808 fio-3.35 00:16:46.808 Starting 2 threads 00:17:13.370 00:17:13.370 first_half: (groupid=0, jobs=1): err= 0: pid=84566: Wed Nov 27 11:12:39 2024 00:17:13.370 read: IOPS=2859, BW=11.2MiB/s (11.7MB/s)(256MiB/22891msec) 00:17:13.370 slat (nsec): min=2883, max=27555, avg=4977.18, stdev=1057.39 00:17:13.370 clat (usec): min=871, max=419182, avg=37402.80, stdev=27134.32 00:17:13.370 lat (usec): min=874, max=419188, avg=37407.78, stdev=27134.44 00:17:13.370 clat percentiles (msec): 00:17:13.370 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 30], 00:17:13.370 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 33], 00:17:13.371 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 42], 95.00th=[ 69], 00:17:13.371 | 99.00th=[ 157], 99.50th=[ 213], 99.90th=[ 321], 99.95th=[ 363], 00:17:13.371 | 99.99th=[ 409] 00:17:13.371 write: IOPS=2866, BW=11.2MiB/s (11.7MB/s)(256MiB/22859msec); 0 zone resets 00:17:13.371 slat (usec): min=3, max=457, avg= 6.10, stdev= 3.35 00:17:13.371 clat (usec): min=363, max=60574, avg=7315.31, stdev=8227.86 00:17:13.371 lat (usec): min=369, max=60579, avg=7321.41, stdev=8227.98 00:17:13.371 clat percentiles (usec): 00:17:13.371 | 1.00th=[ 742], 5.00th=[ 1004], 10.00th=[ 1303], 20.00th=[ 2474], 00:17:13.371 | 30.00th=[ 3163], 40.00th=[ 3949], 50.00th=[ 4752], 60.00th=[ 5538], 00:17:13.371 | 70.00th=[ 6652], 80.00th=[10945], 90.00th=[14877], 95.00th=[23987], 00:17:13.371 | 99.00th=[49021], 99.50th=[54264], 99.90th=[57410], 99.95th=[57934], 00:17:13.371 | 99.99th=[59507] 00:17:13.371 bw ( KiB/s): min= 1520, max=49624, per=98.72%, avg=22642.78, stdev=14490.05, samples=23 00:17:13.371 iops : min= 380, max=12406, avg=5660.70, stdev=3622.51, samples=23 00:17:13.371 lat (usec) : 500=0.03%, 750=0.51%, 1000=1.93% 00:17:13.371 lat (msec) : 2=5.35%, 4=12.58%, 10=19.33%, 20=8.50%, 50=47.95% 00:17:13.371 lat (msec) : 100=2.01%, 250=1.63%, 500=0.19% 00:17:13.371 cpu : usr=99.28%, sys=0.16%, ctx=36, majf=0, minf=5587 00:17:13.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:13.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.371 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:13.371 issued rwts: total=65468,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:13.371 second_half: (groupid=0, jobs=1): err= 0: pid=84567: Wed Nov 27 11:12:39 2024 00:17:13.371 read: IOPS=2891, BW=11.3MiB/s (11.8MB/s)(256MiB/22650msec) 00:17:13.371 slat (nsec): min=3019, max=37796, avg=4261.97, stdev=1116.63 00:17:13.371 clat (msec): min=9, max=405, avg=37.60, stdev=22.07 00:17:13.371 lat (msec): min=9, max=405, avg=37.61, stdev=22.07 00:17:13.371 clat percentiles (msec): 00:17:13.371 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:17:13.371 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 34], 00:17:13.371 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 45], 95.00th=[ 66], 
00:17:13.371 | 99.00th=[ 144], 99.50th=[ 165], 99.90th=[ 255], 99.95th=[ 275], 00:17:13.371 | 99.99th=[ 296] 00:17:13.371 write: IOPS=2910, BW=11.4MiB/s (11.9MB/s)(256MiB/22519msec); 0 zone resets 00:17:13.371 slat (usec): min=3, max=2656, avg= 5.80, stdev=17.04 00:17:13.371 clat (usec): min=374, max=39040, avg=6642.70, stdev=5035.60 00:17:13.371 lat (usec): min=381, max=39049, avg=6648.50, stdev=5036.39 00:17:13.371 clat percentiles (usec): 00:17:13.371 | 1.00th=[ 766], 5.00th=[ 1532], 10.00th=[ 2409], 20.00th=[ 3195], 00:17:13.371 | 30.00th=[ 3916], 40.00th=[ 4555], 50.00th=[ 5145], 60.00th=[ 5538], 00:17:13.371 | 70.00th=[ 6259], 80.00th=[ 9896], 90.00th=[13960], 95.00th=[17171], 00:17:13.371 | 99.00th=[24773], 99.50th=[28443], 99.90th=[31589], 99.95th=[32375], 00:17:13.371 | 99.99th=[36439] 00:17:13.371 bw ( KiB/s): min= 3280, max=46328, per=99.34%, avg=22785.09, stdev=15023.67, samples=23 00:17:13.371 iops : min= 820, max=11582, avg=5696.26, stdev=3755.93, samples=23 00:17:13.371 lat (usec) : 500=0.03%, 750=0.40%, 1000=0.93% 00:17:13.371 lat (msec) : 2=2.14%, 4=12.31%, 10=24.22%, 20=8.69%, 50=47.29% 00:17:13.371 lat (msec) : 100=2.37%, 250=1.55%, 500=0.06% 00:17:13.371 cpu : usr=99.24%, sys=0.13%, ctx=29, majf=0, minf=5555 00:17:13.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:13.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.371 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:13.371 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:13.371 00:17:13.371 Run status group 0 (all jobs): 00:17:13.371 READ: bw=22.3MiB/s (23.4MB/s), 11.2MiB/s-11.3MiB/s (11.7MB/s-11.8MB/s), io=512MiB (536MB), run=22650-22891msec 00:17:13.371 WRITE: bw=22.4MiB/s (23.5MB/s), 11.2MiB/s-11.4MiB/s (11.7MB/s-11.9MB/s), io=512MiB (537MB), run=22519-22859msec 00:17:13.371 ----------------------------------------------------- 00:17:13.371 Suppressions used: 00:17:13.371 count bytes template 00:17:13.371 2 10 /usr/src/fio/parse.c 00:17:13.371 3 288 /usr/src/fio/iolog.c 00:17:13.371 1 8 libtcmalloc_minimal.so 00:17:13.371 1 904 libcrypto.so 00:17:13.371 ----------------------------------------------------- 00:17:13.371 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:13.371 
11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:13.371 11:12:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:13.371 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:13.371 fio-3.35 00:17:13.371 Starting 1 thread 00:17:31.473 00:17:31.473 test: (groupid=0, jobs=1): err= 0: pid=84863: Wed Nov 27 11:12:57 2024 00:17:31.473 read: IOPS=6537, BW=25.5MiB/s (26.8MB/s)(255MiB/9974msec) 00:17:31.473 slat (usec): min=2, max=116, avg= 4.99, stdev= 2.54 00:17:31.473 clat (usec): min=454, max=50036, avg=19570.17, stdev=4433.97 00:17:31.473 lat (usec): min=460, max=50045, avg=19575.16, stdev=4435.38 00:17:31.473 clat percentiles (usec): 00:17:31.473 | 1.00th=[14222], 5.00th=[14484], 10.00th=[14746], 20.00th=[15008], 00:17:31.473 | 30.00th=[15533], 40.00th=[16909], 50.00th=[19268], 60.00th=[20841], 00:17:31.473 | 70.00th=[22414], 80.00th=[23725], 90.00th=[25560], 95.00th=[26608], 00:17:31.473 | 99.00th=[30016], 99.50th=[32113], 99.90th=[38011], 99.95th=[43779], 00:17:31.473 | 99.99th=[49021] 00:17:31.473 write: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(256MiB/5616msec); 0 zone resets 00:17:31.473 slat (usec): min=3, max=2012, avg= 6.13, stdev=13.76 00:17:31.473 clat (usec): min=439, max=59343, avg=10903.98, stdev=12385.59 00:17:31.473 lat (usec): min=444, max=59350, avg=10910.11, stdev=12385.66 00:17:31.473 clat percentiles (usec): 00:17:31.473 | 1.00th=[ 685], 5.00th=[ 865], 10.00th=[ 1012], 20.00th=[ 1237], 00:17:31.473 | 30.00th=[ 1532], 40.00th=[ 2638], 50.00th=[ 7242], 60.00th=[ 9765], 00:17:31.473 | 70.00th=[12780], 80.00th=[15401], 90.00th=[31327], 95.00th=[37487], 00:17:31.473 | 99.00th=[51119], 99.50th=[54789], 99.90th=[57410], 99.95th=[57934], 00:17:31.473 | 99.99th=[58983] 00:17:31.473 bw ( KiB/s): min=11272, max=57072, per=93.60%, avg=43690.00, stdev=13419.39, samples=12 00:17:31.473 iops : min= 2818, max=14268, avg=10922.67, stdev=3354.98, samples=12 00:17:31.473 lat (usec) : 500=0.01%, 750=1.10%, 1000=3.73% 00:17:31.473 lat (msec) : 2=13.30%, 4=2.71%, 10=9.72%, 20=38.47%, 50=30.32% 00:17:31.473 lat (msec) : 100=0.65% 00:17:31.473 cpu : usr=98.17%, sys=0.48%, ctx=55, majf=0, 
minf=5577 00:17:31.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:31.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.473 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:31.473 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:31.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:31.473 00:17:31.473 Run status group 0 (all jobs): 00:17:31.473 READ: bw=25.5MiB/s (26.8MB/s), 25.5MiB/s-25.5MiB/s (26.8MB/s-26.8MB/s), io=255MiB (267MB), run=9974-9974msec 00:17:31.473 WRITE: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=256MiB (268MB), run=5616-5616msec 00:17:31.473 ----------------------------------------------------- 00:17:31.473 Suppressions used: 00:17:31.473 count bytes template 00:17:31.473 1 5 /usr/src/fio/parse.c 00:17:31.473 2 192 /usr/src/fio/iolog.c 00:17:31.473 1 8 libtcmalloc_minimal.so 00:17:31.473 1 904 libcrypto.so 00:17:31.473 ----------------------------------------------------- 00:17:31.473 00:17:31.473 11:12:58 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:31.473 11:12:58 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:31.473 11:12:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:31.473 11:12:58 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:31.473 Remove shared memory files 00:17:31.473 11:12:58 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:31.473 11:12:58 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:31.473 11:12:58 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:31.473 11:12:58 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:31.473 11:12:58 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69828 /dev/shm/spdk_tgt_trace.pid83249 00:17:31.473 11:12:58 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:31.473 11:12:58 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:31.473 ************************************ 00:17:31.473 END TEST ftl_fio_basic 00:17:31.473 ************************************ 00:17:31.473 00:17:31.473 real 1m0.633s 00:17:31.473 user 2m11.746s 00:17:31.473 sys 0m2.928s 00:17:31.473 11:12:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:31.473 11:12:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:31.473 11:12:58 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:31.473 11:12:58 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:31.473 11:12:58 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:31.473 11:12:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:31.473 ************************************ 00:17:31.473 START TEST ftl_bdevperf 00:17:31.473 ************************************ 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:31.473 * Looking for test storage... 
00:17:31.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:31.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.473 --rc genhtml_branch_coverage=1 00:17:31.473 --rc genhtml_function_coverage=1 00:17:31.473 --rc genhtml_legend=1 00:17:31.473 --rc geninfo_all_blocks=1 00:17:31.473 --rc geninfo_unexecuted_blocks=1 00:17:31.473 00:17:31.473 ' 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:31.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.473 --rc genhtml_branch_coverage=1 00:17:31.473 
--rc genhtml_function_coverage=1 00:17:31.473 --rc genhtml_legend=1 00:17:31.473 --rc geninfo_all_blocks=1 00:17:31.473 --rc geninfo_unexecuted_blocks=1 00:17:31.473 00:17:31.473 ' 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:31.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.473 --rc genhtml_branch_coverage=1 00:17:31.473 --rc genhtml_function_coverage=1 00:17:31.473 --rc genhtml_legend=1 00:17:31.473 --rc geninfo_all_blocks=1 00:17:31.473 --rc geninfo_unexecuted_blocks=1 00:17:31.473 00:17:31.473 ' 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:31.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.473 --rc genhtml_branch_coverage=1 00:17:31.473 --rc genhtml_function_coverage=1 00:17:31.473 --rc genhtml_legend=1 00:17:31.473 --rc geninfo_all_blocks=1 00:17:31.473 --rc geninfo_unexecuted_blocks=1 00:17:31.473 00:17:31.473 ' 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:31.473 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=85123 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 85123 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 85123 ']' 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:31.474 11:12:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:31.474 [2024-11-27 11:12:58.582105] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:31.474 [2024-11-27 11:12:58.582623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85123 ] 00:17:31.474 [2024-11-27 11:12:58.736026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.474 [2024-11-27 11:12:58.789815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:31.474 { 00:17:31.474 "name": "nvme0n1", 00:17:31.474 "aliases": [ 00:17:31.474 "588275d3-0cb4-4ba3-b4c2-a34a7aa5e3b1" 00:17:31.474 ], 00:17:31.474 "product_name": "NVMe disk", 00:17:31.474 "block_size": 4096, 00:17:31.474 "num_blocks": 1310720, 00:17:31.474 "uuid": "588275d3-0cb4-4ba3-b4c2-a34a7aa5e3b1", 00:17:31.474 "numa_id": -1, 00:17:31.474 "assigned_rate_limits": { 00:17:31.474 "rw_ios_per_sec": 0, 00:17:31.474 "rw_mbytes_per_sec": 0, 00:17:31.474 "r_mbytes_per_sec": 0, 00:17:31.474 "w_mbytes_per_sec": 0 00:17:31.474 }, 00:17:31.474 "claimed": true, 00:17:31.474 "claim_type": "read_many_write_one", 00:17:31.474 "zoned": false, 00:17:31.474 "supported_io_types": { 00:17:31.474 "read": true, 00:17:31.474 "write": true, 00:17:31.474 "unmap": true, 00:17:31.474 "flush": true, 00:17:31.474 "reset": true, 00:17:31.474 "nvme_admin": true, 00:17:31.474 "nvme_io": true, 00:17:31.474 "nvme_io_md": false, 00:17:31.474 "write_zeroes": true, 00:17:31.474 "zcopy": false, 00:17:31.474 "get_zone_info": false, 00:17:31.474 "zone_management": false, 00:17:31.474 "zone_append": false, 00:17:31.474 "compare": true, 00:17:31.474 "compare_and_write": false, 00:17:31.474 "abort": true, 00:17:31.474 "seek_hole": false, 00:17:31.474 "seek_data": false, 00:17:31.474 "copy": true, 00:17:31.474 "nvme_iov_md": false 00:17:31.474 }, 00:17:31.474 "driver_specific": { 00:17:31.474 
"nvme": [ 00:17:31.474 { 00:17:31.474 "pci_address": "0000:00:11.0", 00:17:31.474 "trid": { 00:17:31.474 "trtype": "PCIe", 00:17:31.474 "traddr": "0000:00:11.0" 00:17:31.474 }, 00:17:31.474 "ctrlr_data": { 00:17:31.474 "cntlid": 0, 00:17:31.474 "vendor_id": "0x1b36", 00:17:31.474 "model_number": "QEMU NVMe Ctrl", 00:17:31.474 "serial_number": "12341", 00:17:31.474 "firmware_revision": "8.0.0", 00:17:31.474 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:31.474 "oacs": { 00:17:31.474 "security": 0, 00:17:31.474 "format": 1, 00:17:31.474 "firmware": 0, 00:17:31.474 "ns_manage": 1 00:17:31.474 }, 00:17:31.474 "multi_ctrlr": false, 00:17:31.474 "ana_reporting": false 00:17:31.474 }, 00:17:31.474 "vs": { 00:17:31.474 "nvme_version": "1.4" 00:17:31.474 }, 00:17:31.474 "ns_data": { 00:17:31.474 "id": 1, 00:17:31.474 "can_share": false 00:17:31.474 } 00:17:31.474 } 00:17:31.474 ], 00:17:31.474 "mp_policy": "active_passive" 00:17:31.474 } 00:17:31.474 } 00:17:31.474 ]' 00:17:31.474 11:12:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:31.474 11:13:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:31.474 11:13:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:31.474 11:13:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:31.474 11:13:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:31.474 11:13:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:17:31.474 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:31.474 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:31.474 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:31.474 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:31.474 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:31.474 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=95719df7-e8f8-4428-8680-063cd4f05e68 00:17:31.474 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:31.474 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 95719df7-e8f8-4428-8680-063cd4f05e68 00:17:31.734 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:31.992 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=bc92c43b-2067-4087-b8be-582997688a1f 00:17:31.992 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bc92c43b-2067-4087-b8be-582997688a1f 00:17:32.251 11:13:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=a12a7aa2-adad-44cf-8e6f-c6886b28c694 00:17:32.251 11:13:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a12a7aa2-adad-44cf-8e6f-c6886b28c694 00:17:32.251 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:32.251 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:32.251 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=a12a7aa2-adad-44cf-8e6f-c6886b28c694 00:17:32.251 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:32.251 11:13:00 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size a12a7aa2-adad-44cf-8e6f-c6886b28c694 00:17:32.251 11:13:00 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=a12a7aa2-adad-44cf-8e6f-c6886b28c694 00:17:32.251 11:13:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:32.251 11:13:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:32.251 11:13:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:32.251 11:13:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a12a7aa2-adad-44cf-8e6f-c6886b28c694 00:17:32.251 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:32.251 { 00:17:32.251 "name": "a12a7aa2-adad-44cf-8e6f-c6886b28c694", 00:17:32.251 "aliases": [ 00:17:32.251 "lvs/nvme0n1p0" 00:17:32.251 ], 00:17:32.251 "product_name": "Logical Volume", 00:17:32.251 "block_size": 4096, 00:17:32.251 "num_blocks": 26476544, 00:17:32.251 "uuid": "a12a7aa2-adad-44cf-8e6f-c6886b28c694", 00:17:32.251 "assigned_rate_limits": { 00:17:32.251 "rw_ios_per_sec": 0, 00:17:32.251 "rw_mbytes_per_sec": 0, 00:17:32.251 "r_mbytes_per_sec": 0, 00:17:32.251 "w_mbytes_per_sec": 0 00:17:32.251 }, 00:17:32.251 "claimed": false, 00:17:32.251 "zoned": false, 00:17:32.251 "supported_io_types": { 00:17:32.251 "read": true, 00:17:32.251 "write": true, 00:17:32.251 "unmap": true, 00:17:32.251 "flush": false, 00:17:32.251 "reset": true, 00:17:32.251 "nvme_admin": false, 00:17:32.251 "nvme_io": false, 00:17:32.251 "nvme_io_md": false, 00:17:32.251 "write_zeroes": true, 00:17:32.251 "zcopy": false, 00:17:32.251 "get_zone_info": false, 00:17:32.251 "zone_management": false, 00:17:32.251 "zone_append": false, 00:17:32.251 "compare": false, 00:17:32.251 "compare_and_write": false, 00:17:32.251 "abort": false, 00:17:32.251 "seek_hole": true, 00:17:32.251 "seek_data": true, 00:17:32.251 "copy": false, 00:17:32.251 "nvme_iov_md": false 00:17:32.251 }, 00:17:32.251 "driver_specific": { 00:17:32.251 "lvol": { 00:17:32.251 "lvol_store_uuid": "bc92c43b-2067-4087-b8be-582997688a1f", 00:17:32.251 "base_bdev": "nvme0n1", 00:17:32.251 "thin_provision": true, 00:17:32.251 "num_allocated_clusters": 0, 00:17:32.251 "snapshot": false, 00:17:32.251 "clone": false, 00:17:32.251 "esnap_clone": false 00:17:32.251 } 00:17:32.251 } 00:17:32.251 } 00:17:32.251 ]' 00:17:32.251 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:32.251 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:32.251 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:32.511 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:32.511 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:32.511 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:32.511 11:13:01 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:32.511 11:13:01 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:32.511 11:13:01 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:32.773 11:13:01 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:32.773 11:13:01 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:32.773 11:13:01 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size a12a7aa2-adad-44cf-8e6f-c6886b28c694 00:17:32.773 11:13:01 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=a12a7aa2-adad-44cf-8e6f-c6886b28c694 00:17:32.773 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:32.773 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:32.773 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:32.773 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a12a7aa2-adad-44cf-8e6f-c6886b28c694 00:17:32.773 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:32.773 { 00:17:32.773 "name": "a12a7aa2-adad-44cf-8e6f-c6886b28c694", 00:17:32.773 "aliases": [ 00:17:32.773 "lvs/nvme0n1p0" 00:17:32.773 ], 00:17:32.773 "product_name": "Logical Volume", 00:17:32.773 "block_size": 4096, 00:17:32.773 "num_blocks": 26476544, 00:17:32.773 "uuid": "a12a7aa2-adad-44cf-8e6f-c6886b28c694", 00:17:32.773 "assigned_rate_limits": { 00:17:32.773 "rw_ios_per_sec": 0, 00:17:32.773 "rw_mbytes_per_sec": 0, 00:17:32.773 "r_mbytes_per_sec": 0, 00:17:32.773 "w_mbytes_per_sec": 0 00:17:32.773 }, 00:17:32.773 "claimed": false, 00:17:32.773 "zoned": false, 00:17:32.773 "supported_io_types": { 00:17:32.773 "read": true, 00:17:32.773 "write": true, 00:17:32.773 "unmap": true, 00:17:32.773 "flush": false, 00:17:32.773 "reset": true, 00:17:32.773 "nvme_admin": false, 00:17:32.773 "nvme_io": false, 00:17:32.773 "nvme_io_md": false, 00:17:32.773 "write_zeroes": true, 00:17:32.773 "zcopy": false, 00:17:32.773 "get_zone_info": false, 00:17:32.773 "zone_management": false, 00:17:32.773 "zone_append": false, 00:17:32.773 "compare": false, 00:17:32.773 "compare_and_write": false, 00:17:32.773 "abort": false, 00:17:32.773 "seek_hole": true, 00:17:32.773 "seek_data": true, 00:17:32.773 "copy": false, 00:17:32.773 "nvme_iov_md": false 00:17:32.773 }, 00:17:32.773 "driver_specific": { 00:17:32.773 "lvol": { 00:17:32.773 "lvol_store_uuid": "bc92c43b-2067-4087-b8be-582997688a1f", 00:17:32.773 "base_bdev": "nvme0n1", 00:17:32.773 "thin_provision": true, 00:17:32.773 "num_allocated_clusters": 0, 00:17:32.773 "snapshot": false, 00:17:32.773 "clone": false, 00:17:32.773 "esnap_clone": false 00:17:32.773 } 00:17:32.773 } 00:17:32.773 } 00:17:32.773 ]' 00:17:32.773 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:32.773 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:32.773 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:33.034 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:33.034 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:33.034 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:33.034 11:13:01 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:33.034 11:13:01 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:33.034 11:13:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:17:33.034 11:13:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size a12a7aa2-adad-44cf-8e6f-c6886b28c694 00:17:33.034 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=a12a7aa2-adad-44cf-8e6f-c6886b28c694 00:17:33.034 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:33.034 11:13:01 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:17:33.034 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:33.034 11:13:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a12a7aa2-adad-44cf-8e6f-c6886b28c694 00:17:33.604 11:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:33.604 { 00:17:33.604 "name": "a12a7aa2-adad-44cf-8e6f-c6886b28c694", 00:17:33.604 "aliases": [ 00:17:33.604 "lvs/nvme0n1p0" 00:17:33.604 ], 00:17:33.604 "product_name": "Logical Volume", 00:17:33.604 "block_size": 4096, 00:17:33.604 "num_blocks": 26476544, 00:17:33.604 "uuid": "a12a7aa2-adad-44cf-8e6f-c6886b28c694", 00:17:33.604 "assigned_rate_limits": { 00:17:33.604 "rw_ios_per_sec": 0, 00:17:33.604 "rw_mbytes_per_sec": 0, 00:17:33.604 "r_mbytes_per_sec": 0, 00:17:33.604 "w_mbytes_per_sec": 0 00:17:33.604 }, 00:17:33.604 "claimed": false, 00:17:33.604 "zoned": false, 00:17:33.604 "supported_io_types": { 00:17:33.604 "read": true, 00:17:33.605 "write": true, 00:17:33.605 "unmap": true, 00:17:33.605 "flush": false, 00:17:33.605 "reset": true, 00:17:33.605 "nvme_admin": false, 00:17:33.605 "nvme_io": false, 00:17:33.605 "nvme_io_md": false, 00:17:33.605 "write_zeroes": true, 00:17:33.605 "zcopy": false, 00:17:33.605 "get_zone_info": false, 00:17:33.605 "zone_management": false, 00:17:33.605 "zone_append": false, 00:17:33.605 "compare": false, 00:17:33.605 "compare_and_write": false, 00:17:33.605 "abort": false, 00:17:33.605 "seek_hole": true, 00:17:33.605 "seek_data": true, 00:17:33.605 "copy": false, 00:17:33.605 "nvme_iov_md": false 00:17:33.605 }, 00:17:33.605 "driver_specific": { 00:17:33.605 "lvol": { 00:17:33.605 "lvol_store_uuid": "bc92c43b-2067-4087-b8be-582997688a1f", 00:17:33.605 "base_bdev": "nvme0n1", 00:17:33.605 "thin_provision": true, 00:17:33.605 "num_allocated_clusters": 0, 00:17:33.605 "snapshot": false, 00:17:33.605 "clone": false, 00:17:33.605 "esnap_clone": false 00:17:33.605 } 00:17:33.605 } 00:17:33.605 } 00:17:33.605 ]' 00:17:33.605 11:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:33.605 11:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:33.605 11:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:33.605 11:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:33.605 11:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:33.605 11:13:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:33.605 11:13:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:17:33.605 11:13:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a12a7aa2-adad-44cf-8e6f-c6886b28c694 -c nvc0n1p0 --l2p_dram_limit 20 00:17:33.605 [2024-11-27 11:13:02.455807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.605 [2024-11-27 11:13:02.455875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:33.605 [2024-11-27 11:13:02.455911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:33.605 [2024-11-27 11:13:02.455924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.605 [2024-11-27 11:13:02.455993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.605 [2024-11-27 11:13:02.456007] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:33.605 [2024-11-27 11:13:02.456022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:33.605 [2024-11-27 11:13:02.456030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.605 [2024-11-27 11:13:02.456051] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:33.605 [2024-11-27 11:13:02.456371] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:33.605 [2024-11-27 11:13:02.456390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.605 [2024-11-27 11:13:02.456399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:33.605 [2024-11-27 11:13:02.456410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:17:33.605 [2024-11-27 11:13:02.456418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.605 [2024-11-27 11:13:02.456453] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8e1bb8e8-a915-4705-bdec-8344fa66f267 00:17:33.605 [2024-11-27 11:13:02.458238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.605 [2024-11-27 11:13:02.458411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:33.605 [2024-11-27 11:13:02.458430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:33.605 [2024-11-27 11:13:02.458441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.605 [2024-11-27 11:13:02.466928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.605 [2024-11-27 11:13:02.466965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:33.605 [2024-11-27 11:13:02.466976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.387 ms 00:17:33.605 [2024-11-27 11:13:02.466995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.605 [2024-11-27 11:13:02.467076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.605 [2024-11-27 11:13:02.467087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:33.605 [2024-11-27 11:13:02.467100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:33.605 [2024-11-27 11:13:02.467110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.605 [2024-11-27 11:13:02.467156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.605 [2024-11-27 11:13:02.467170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:33.605 [2024-11-27 11:13:02.467179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:33.605 [2024-11-27 11:13:02.467189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.605 [2024-11-27 11:13:02.467210] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:33.605 [2024-11-27 11:13:02.469540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.605 [2024-11-27 11:13:02.469699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:33.605 [2024-11-27 11:13:02.469720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.334 ms 00:17:33.605 [2024-11-27 11:13:02.469729] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.605 [2024-11-27 11:13:02.469773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.605 [2024-11-27 11:13:02.469787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:33.605 [2024-11-27 11:13:02.469801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:33.605 [2024-11-27 11:13:02.469810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.605 [2024-11-27 11:13:02.469842] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:33.605 [2024-11-27 11:13:02.470011] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:33.605 [2024-11-27 11:13:02.470028] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:33.605 [2024-11-27 11:13:02.470040] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:33.605 [2024-11-27 11:13:02.470053] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:33.605 [2024-11-27 11:13:02.470070] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:33.605 [2024-11-27 11:13:02.470080] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:33.605 [2024-11-27 11:13:02.470089] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:33.605 [2024-11-27 11:13:02.470099] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:33.605 [2024-11-27 11:13:02.470108] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:33.605 [2024-11-27 11:13:02.470119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.605 [2024-11-27 11:13:02.470126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:33.605 [2024-11-27 11:13:02.470138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:33.605 [2024-11-27 11:13:02.470147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.605 [2024-11-27 11:13:02.470233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.605 [2024-11-27 11:13:02.470241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:33.605 [2024-11-27 11:13:02.470251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:33.605 [2024-11-27 11:13:02.470258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.605 [2024-11-27 11:13:02.470351] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:33.605 [2024-11-27 11:13:02.470368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:33.605 [2024-11-27 11:13:02.470379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:33.605 [2024-11-27 11:13:02.470389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.605 [2024-11-27 11:13:02.470400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:33.605 [2024-11-27 11:13:02.470407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:33.605 [2024-11-27 11:13:02.470415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:33.605 
[2024-11-27 11:13:02.470422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:33.605 [2024-11-27 11:13:02.470433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:33.605 [2024-11-27 11:13:02.470441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:33.605 [2024-11-27 11:13:02.470450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:33.605 [2024-11-27 11:13:02.470456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:33.605 [2024-11-27 11:13:02.470467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:33.605 [2024-11-27 11:13:02.470474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:33.605 [2024-11-27 11:13:02.470483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:33.605 [2024-11-27 11:13:02.470490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.605 [2024-11-27 11:13:02.470498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:33.605 [2024-11-27 11:13:02.470505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:33.605 [2024-11-27 11:13:02.470513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.605 [2024-11-27 11:13:02.470520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:33.605 [2024-11-27 11:13:02.470529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:33.605 [2024-11-27 11:13:02.470535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:33.605 [2024-11-27 11:13:02.470546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:33.605 [2024-11-27 11:13:02.470554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:33.605 [2024-11-27 11:13:02.470563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:33.605 [2024-11-27 11:13:02.470570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:33.605 [2024-11-27 11:13:02.470579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:33.606 [2024-11-27 11:13:02.470587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:33.606 [2024-11-27 11:13:02.470598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:33.606 [2024-11-27 11:13:02.470605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:33.606 [2024-11-27 11:13:02.470613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:33.606 [2024-11-27 11:13:02.470620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:33.606 [2024-11-27 11:13:02.470629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:33.606 [2024-11-27 11:13:02.470636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:33.606 [2024-11-27 11:13:02.470645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:33.606 [2024-11-27 11:13:02.470652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:33.606 [2024-11-27 11:13:02.470661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:33.606 [2024-11-27 11:13:02.470667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:33.606 [2024-11-27 11:13:02.470676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:17:33.606 [2024-11-27 11:13:02.470683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.606 [2024-11-27 11:13:02.470692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:33.606 [2024-11-27 11:13:02.470699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:33.606 [2024-11-27 11:13:02.470708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.606 [2024-11-27 11:13:02.470713] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:33.606 [2024-11-27 11:13:02.470726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:33.606 [2024-11-27 11:13:02.470734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:33.606 [2024-11-27 11:13:02.470743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.606 [2024-11-27 11:13:02.470750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:33.606 [2024-11-27 11:13:02.470761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:33.606 [2024-11-27 11:13:02.470767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:33.606 [2024-11-27 11:13:02.470776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:33.606 [2024-11-27 11:13:02.470783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:33.606 [2024-11-27 11:13:02.470791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:33.606 [2024-11-27 11:13:02.470803] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:33.606 [2024-11-27 11:13:02.470816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:33.606 [2024-11-27 11:13:02.470826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:33.606 [2024-11-27 11:13:02.470837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:33.606 [2024-11-27 11:13:02.470844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:33.606 [2024-11-27 11:13:02.470854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:33.606 [2024-11-27 11:13:02.470861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:33.606 [2024-11-27 11:13:02.470874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:33.606 [2024-11-27 11:13:02.470883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:33.606 [2024-11-27 11:13:02.470907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:33.606 [2024-11-27 11:13:02.470914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:33.606 [2024-11-27 11:13:02.470924] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:33.606 [2024-11-27 11:13:02.470931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:33.606 [2024-11-27 11:13:02.470940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:33.606 [2024-11-27 11:13:02.470948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:33.606 [2024-11-27 11:13:02.470958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:33.606 [2024-11-27 11:13:02.470966] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:33.606 [2024-11-27 11:13:02.470981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:33.606 [2024-11-27 11:13:02.470989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:33.606 [2024-11-27 11:13:02.470999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:33.606 [2024-11-27 11:13:02.471007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:33.606 [2024-11-27 11:13:02.471017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:33.606 [2024-11-27 11:13:02.471025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.606 [2024-11-27 11:13:02.471041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:33.606 [2024-11-27 11:13:02.471049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:17:33.606 [2024-11-27 11:13:02.471063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.606 [2024-11-27 11:13:02.471117] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
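Each region in the superblock dump above is reported as blk_offs/blk_sz in FTL blocks (hex); assuming the usual 4 KiB FTL block size, those values line up with the MiB figures in the layout dump earlier. For example, the type 0x3 entry (blk_offs:0x5020 blk_sz:0x80) appears to be the band_md region reported at offset 80.12 MiB with 0.50 MiB of blocks. A quick conversion with shell arithmetic and bc, run outside the test itself:
  echo "scale=2; $((0x5020)) * 4096 / 1048576" | bc   # -> 80.12  (band_md offset in MiB, assuming 4096-byte FTL blocks)
  echo "scale=2; $((0x80)) * 4096 / 1048576" | bc     # -> .50   (band_md size in MiB)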
00:17:33.606 [2024-11-27 11:13:02.471130] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:37.810 [2024-11-27 11:13:05.808293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.808549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:37.810 [2024-11-27 11:13:05.808573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3337.163 ms 00:17:37.810 [2024-11-27 11:13:05.808585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.826374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.826557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:37.810 [2024-11-27 11:13:05.826578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.669 ms 00:17:37.810 [2024-11-27 11:13:05.826592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.826687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.826698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:37.810 [2024-11-27 11:13:05.826713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:37.810 [2024-11-27 11:13:05.826723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.837269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.837423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:37.810 [2024-11-27 11:13:05.837440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.471 ms 00:17:37.810 [2024-11-27 11:13:05.837450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.837477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.837488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:37.810 [2024-11-27 11:13:05.837496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:37.810 [2024-11-27 11:13:05.837507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.838033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.838058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:37.810 [2024-11-27 11:13:05.838069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:17:37.810 [2024-11-27 11:13:05.838083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.838203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.838216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:37.810 [2024-11-27 11:13:05.838227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:37.810 [2024-11-27 11:13:05.838237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.845336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.845387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:37.810 [2024-11-27 
11:13:05.845398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.081 ms 00:17:37.810 [2024-11-27 11:13:05.845407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.855056] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:37.810 [2024-11-27 11:13:05.861978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.862018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:37.810 [2024-11-27 11:13:05.862035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.501 ms 00:17:37.810 [2024-11-27 11:13:05.862043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.943856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.943946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:37.810 [2024-11-27 11:13:05.943970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.778 ms 00:17:37.810 [2024-11-27 11:13:05.943979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.944191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.944206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:37.810 [2024-11-27 11:13:05.944219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:17:37.810 [2024-11-27 11:13:05.944228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.950616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.950672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:37.810 [2024-11-27 11:13:05.950687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.342 ms 00:17:37.810 [2024-11-27 11:13:05.950703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.956093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.956144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:37.810 [2024-11-27 11:13:05.956159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.334 ms 00:17:37.810 [2024-11-27 11:13:05.956166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.956518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.956529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:37.810 [2024-11-27 11:13:05.956545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:17:37.810 [2024-11-27 11:13:05.956553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:05.999080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:05.999137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:37.810 [2024-11-27 11:13:05.999159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.500 ms 00:17:37.810 [2024-11-27 11:13:05.999168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:06.006451] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:06.006660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:37.810 [2024-11-27 11:13:06.006692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.218 ms 00:17:37.810 [2024-11-27 11:13:06.006702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:06.012735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:06.012790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:37.810 [2024-11-27 11:13:06.012804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.982 ms 00:17:37.810 [2024-11-27 11:13:06.012811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:06.019355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:06.019408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:37.810 [2024-11-27 11:13:06.019427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.487 ms 00:17:37.810 [2024-11-27 11:13:06.019435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:06.019490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:06.019500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:37.810 [2024-11-27 11:13:06.019514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:37.810 [2024-11-27 11:13:06.019526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:06.019620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.810 [2024-11-27 11:13:06.019631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:37.810 [2024-11-27 11:13:06.019642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:37.810 [2024-11-27 11:13:06.019649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.810 [2024-11-27 11:13:06.020816] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3564.541 ms, result 0 00:17:37.810 { 00:17:37.810 "name": "ftl0", 00:17:37.810 "uuid": "8e1bb8e8-a915-4705-bdec-8344fa66f267" 00:17:37.810 } 00:17:37.810 11:13:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:37.810 11:13:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:17:37.810 11:13:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:17:37.810 11:13:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:37.810 [2024-11-27 11:13:06.364472] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:37.810 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:37.810 Zero copy mechanism will not be used. 00:17:37.810 Running I/O for 4 seconds... 
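The -o 69632 argument requests a 68 KiB I/O size, which is why bdevperf notes above that the 65536-byte zero-copy threshold is exceeded; the MiB/s figures that follow are simply IOPS multiplied by that I/O size. A spot-check against the first progress sample below, run outside the test itself:
  echo $(( 69632 / 1024 ))                        # -> 68 KiB per I/O, above the 64 KiB zero-copy threshold
  echo "scale=2; 903.00 * 69632 / 1048576" | bc   # -> 59.96 MiB/s for the first 903.00 IOPS sample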
00:17:39.700 903.00 IOPS, 59.96 MiB/s [2024-11-27T11:13:09.530Z] 788.00 IOPS, 52.33 MiB/s [2024-11-27T11:13:10.477Z] 856.33 IOPS, 56.87 MiB/s [2024-11-27T11:13:10.477Z] 817.25 IOPS, 54.27 MiB/s 00:17:41.594 Latency(us) 00:17:41.594 [2024-11-27T11:13:10.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.594 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:41.594 ftl0 : 4.00 817.14 54.26 0.00 0.00 1293.83 322.95 3276.80 00:17:41.594 [2024-11-27T11:13:10.477Z] =================================================================================================================== 00:17:41.594 [2024-11-27T11:13:10.477Z] Total : 817.14 54.26 0.00 0.00 1293.83 322.95 3276.80 00:17:41.595 { 00:17:41.595 "results": [ 00:17:41.595 { 00:17:41.595 "job": "ftl0", 00:17:41.595 "core_mask": "0x1", 00:17:41.595 "workload": "randwrite", 00:17:41.595 "status": "finished", 00:17:41.595 "queue_depth": 1, 00:17:41.595 "io_size": 69632, 00:17:41.595 "runtime": 4.001757, 00:17:41.595 "iops": 817.1410707846578, 00:17:41.595 "mibps": 54.26327423179368, 00:17:41.595 "io_failed": 0, 00:17:41.595 "io_timeout": 0, 00:17:41.595 "avg_latency_us": 1293.832069630675, 00:17:41.595 "min_latency_us": 322.95384615384614, 00:17:41.595 "max_latency_us": 3276.8 00:17:41.595 } 00:17:41.595 ], 00:17:41.595 "core_count": 1 00:17:41.595 } 00:17:41.595 [2024-11-27 11:13:10.373722] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:41.595 11:13:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:41.856 [2024-11-27 11:13:10.482499] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:41.856 Running I/O for 4 seconds... 
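For the 68 KiB queue-depth-1 run that finished above, the results block is internally consistent: iops is the completed I/O count divided by runtime, and mibps is iops times the I/O size. A rough spot-check, run outside the test itself:
  echo "817.1410707846578 * 4.001757" | bc        # ~3270 I/Os completed over the ~4 s runtime
  echo "scale=2; 817.14 * 69632 / 1048576" | bc   # -> 54.26 MiB/s, matching the reported mibps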
00:17:43.741 6612.00 IOPS, 25.83 MiB/s [2024-11-27T11:13:13.571Z] 5811.00 IOPS, 22.70 MiB/s [2024-11-27T11:13:14.516Z] 5568.67 IOPS, 21.75 MiB/s [2024-11-27T11:13:14.778Z] 5513.50 IOPS, 21.54 MiB/s 00:17:45.895 Latency(us) 00:17:45.895 [2024-11-27T11:13:14.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.895 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.895 ftl0 : 4.03 5503.67 21.50 0.00 0.00 23163.78 302.47 48799.11 00:17:45.895 [2024-11-27T11:13:14.778Z] =================================================================================================================== 00:17:45.895 [2024-11-27T11:13:14.778Z] Total : 5503.67 21.50 0.00 0.00 23163.78 0.00 48799.11 00:17:45.895 [2024-11-27 11:13:14.521114] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:45.895 { 00:17:45.895 "results": [ 00:17:45.895 { 00:17:45.895 "job": "ftl0", 00:17:45.895 "core_mask": "0x1", 00:17:45.895 "workload": "randwrite", 00:17:45.895 "status": "finished", 00:17:45.895 "queue_depth": 128, 00:17:45.895 "io_size": 4096, 00:17:45.895 "runtime": 4.0304, 00:17:45.895 "iops": 5503.67209210004, 00:17:45.895 "mibps": 21.49871910976578, 00:17:45.895 "io_failed": 0, 00:17:45.895 "io_timeout": 0, 00:17:45.895 "avg_latency_us": 23163.780899551264, 00:17:45.895 "min_latency_us": 302.4738461538462, 00:17:45.895 "max_latency_us": 48799.11384615384 00:17:45.895 } 00:17:45.895 ], 00:17:45.895 "core_count": 1 00:17:45.895 } 00:17:45.895 11:13:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:45.895 [2024-11-27 11:13:14.632241] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:45.895 Running I/O for 4 seconds... 
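For the 4 KiB queue-depth-128 random-write run above, a rough Little's-law check ties the reported IOPS and average latency back to the configured queue depth (I/Os in flight is approximately IOPS times average latency). A spot-check, run outside the test itself:
  echo "scale=2; 5503.67 * 23163.78 / 1000000" | bc   # ~127.5 I/Os in flight, consistent with -q 128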
00:17:47.784 4467.00 IOPS, 17.45 MiB/s [2024-11-27T11:13:17.704Z] 4389.00 IOPS, 17.14 MiB/s [2024-11-27T11:13:18.649Z] 4383.67 IOPS, 17.12 MiB/s [2024-11-27T11:13:18.911Z] 4554.50 IOPS, 17.79 MiB/s 00:17:50.028 Latency(us) 00:17:50.028 [2024-11-27T11:13:18.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.028 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:50.028 Verification LBA range: start 0x0 length 0x1400000 00:17:50.028 ftl0 : 4.02 4568.73 17.85 0.00 0.00 27933.97 392.27 40733.14 00:17:50.028 [2024-11-27T11:13:18.911Z] =================================================================================================================== 00:17:50.028 [2024-11-27T11:13:18.911Z] Total : 4568.73 17.85 0.00 0.00 27933.97 0.00 40733.14 00:17:50.028 [2024-11-27 11:13:18.656699] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:50.028 { 00:17:50.028 "results": [ 00:17:50.028 { 00:17:50.028 "job": "ftl0", 00:17:50.028 "core_mask": "0x1", 00:17:50.028 "workload": "verify", 00:17:50.028 "status": "finished", 00:17:50.028 "verify_range": { 00:17:50.028 "start": 0, 00:17:50.028 "length": 20971520 00:17:50.028 }, 00:17:50.028 "queue_depth": 128, 00:17:50.028 "io_size": 4096, 00:17:50.028 "runtime": 4.015562, 00:17:50.028 "iops": 4568.725373932714, 00:17:50.028 "mibps": 17.846583491924665, 00:17:50.028 "io_failed": 0, 00:17:50.028 "io_timeout": 0, 00:17:50.028 "avg_latency_us": 27933.966054893546, 00:17:50.028 "min_latency_us": 392.27076923076925, 00:17:50.028 "max_latency_us": 40733.14461538461 00:17:50.028 } 00:17:50.028 ], 00:17:50.028 "core_count": 1 00:17:50.028 } 00:17:50.028 11:13:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:50.028 [2024-11-27 11:13:18.850502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.028 [2024-11-27 11:13:18.850541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:50.028 [2024-11-27 11:13:18.850555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:50.028 [2024-11-27 11:13:18.850563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.028 [2024-11-27 11:13:18.850586] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:50.028 [2024-11-27 11:13:18.851054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.028 [2024-11-27 11:13:18.851080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:50.028 [2024-11-27 11:13:18.851089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:17:50.028 [2024-11-27 11:13:18.851100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.028 [2024-11-27 11:13:18.853623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.028 [2024-11-27 11:13:18.853660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:50.028 [2024-11-27 11:13:18.853669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms 00:17:50.028 [2024-11-27 11:13:18.853685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.292 [2024-11-27 11:13:19.070487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.292 [2024-11-27 11:13:19.070541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:17:50.292 [2024-11-27 11:13:19.070553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 216.785 ms 00:17:50.292 [2024-11-27 11:13:19.070563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.292 [2024-11-27 11:13:19.076710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.292 [2024-11-27 11:13:19.076740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:50.292 [2024-11-27 11:13:19.076751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.115 ms 00:17:50.292 [2024-11-27 11:13:19.076761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.292 [2024-11-27 11:13:19.079007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.292 [2024-11-27 11:13:19.079042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:50.292 [2024-11-27 11:13:19.079051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.192 ms 00:17:50.292 [2024-11-27 11:13:19.079063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.292 [2024-11-27 11:13:19.083703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.292 [2024-11-27 11:13:19.083742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:50.292 [2024-11-27 11:13:19.083752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.610 ms 00:17:50.292 [2024-11-27 11:13:19.083769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.292 [2024-11-27 11:13:19.083875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.292 [2024-11-27 11:13:19.083887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:50.292 [2024-11-27 11:13:19.083912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:50.292 [2024-11-27 11:13:19.083921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.292 [2024-11-27 11:13:19.086266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.292 [2024-11-27 11:13:19.086301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:50.292 [2024-11-27 11:13:19.086311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.330 ms 00:17:50.292 [2024-11-27 11:13:19.086320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.292 [2024-11-27 11:13:19.088652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.292 [2024-11-27 11:13:19.088688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:50.292 [2024-11-27 11:13:19.088697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.304 ms 00:17:50.292 [2024-11-27 11:13:19.088705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.292 [2024-11-27 11:13:19.090454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.292 [2024-11-27 11:13:19.090487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:50.292 [2024-11-27 11:13:19.090496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.720 ms 00:17:50.292 [2024-11-27 11:13:19.090507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.292 [2024-11-27 11:13:19.092114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.292 [2024-11-27 11:13:19.092146] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:50.292 [2024-11-27 11:13:19.092155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.559 ms 00:17:50.292 [2024-11-27 11:13:19.092163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.292 [2024-11-27 11:13:19.092190] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:50.292 [2024-11-27 11:13:19.092208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:17:50.292 [2024-11-27 11:13:19.092394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:50.292 [2024-11-27 11:13:19.092505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.092998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.093005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.093014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.093022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.093031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.093039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.093047] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.093055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.093064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.093072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:50.293 [2024-11-27 11:13:19.093095] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:50.293 [2024-11-27 11:13:19.093103] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8e1bb8e8-a915-4705-bdec-8344fa66f267 00:17:50.293 [2024-11-27 11:13:19.093112] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:50.293 [2024-11-27 11:13:19.093121] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:50.293 [2024-11-27 11:13:19.093131] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:50.293 [2024-11-27 11:13:19.093139] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:50.293 [2024-11-27 11:13:19.093153] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:50.293 [2024-11-27 11:13:19.093161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:50.293 [2024-11-27 11:13:19.093170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:50.293 [2024-11-27 11:13:19.093179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:50.293 [2024-11-27 11:13:19.093186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:50.293 [2024-11-27 11:13:19.093194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.293 [2024-11-27 11:13:19.093203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:50.293 [2024-11-27 11:13:19.093211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:17:50.293 [2024-11-27 11:13:19.093222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.293 [2024-11-27 11:13:19.094613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.293 [2024-11-27 11:13:19.094636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:50.293 [2024-11-27 11:13:19.094645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:17:50.293 [2024-11-27 11:13:19.094654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.293 [2024-11-27 11:13:19.094738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.293 [2024-11-27 11:13:19.094748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:50.293 [2024-11-27 11:13:19.094756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:50.293 [2024-11-27 11:13:19.094768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.293 [2024-11-27 11:13:19.099303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.293 [2024-11-27 11:13:19.099441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:50.293 [2024-11-27 11:13:19.099457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.293 [2024-11-27 11:13:19.099466] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:50.294 [2024-11-27 11:13:19.099518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.294 [2024-11-27 11:13:19.099527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:50.294 [2024-11-27 11:13:19.099535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.294 [2024-11-27 11:13:19.099544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.294 [2024-11-27 11:13:19.099604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.294 [2024-11-27 11:13:19.099615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:50.294 [2024-11-27 11:13:19.099623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.294 [2024-11-27 11:13:19.099632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.294 [2024-11-27 11:13:19.099645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.294 [2024-11-27 11:13:19.099654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:50.294 [2024-11-27 11:13:19.099662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.294 [2024-11-27 11:13:19.099672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.294 [2024-11-27 11:13:19.108249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.294 [2024-11-27 11:13:19.108293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:50.294 [2024-11-27 11:13:19.108304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.294 [2024-11-27 11:13:19.108314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.294 [2024-11-27 11:13:19.115788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.294 [2024-11-27 11:13:19.115827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:50.294 [2024-11-27 11:13:19.115837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.294 [2024-11-27 11:13:19.115846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.294 [2024-11-27 11:13:19.115886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.294 [2024-11-27 11:13:19.115916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.294 [2024-11-27 11:13:19.115924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.294 [2024-11-27 11:13:19.115934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.294 [2024-11-27 11:13:19.115978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.294 [2024-11-27 11:13:19.115990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.294 [2024-11-27 11:13:19.115998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.294 [2024-11-27 11:13:19.116009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.294 [2024-11-27 11:13:19.116073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.294 [2024-11-27 11:13:19.116087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.294 [2024-11-27 11:13:19.116094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:50.294 [2024-11-27 11:13:19.116104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.294 [2024-11-27 11:13:19.116130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.294 [2024-11-27 11:13:19.116140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:50.294 [2024-11-27 11:13:19.116148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.294 [2024-11-27 11:13:19.116157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.294 [2024-11-27 11:13:19.116188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.294 [2024-11-27 11:13:19.116198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.294 [2024-11-27 11:13:19.116207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.294 [2024-11-27 11:13:19.116216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.294 [2024-11-27 11:13:19.116261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.294 [2024-11-27 11:13:19.116273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.294 [2024-11-27 11:13:19.116283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.294 [2024-11-27 11:13:19.116293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.294 [2024-11-27 11:13:19.116405] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 265.875 ms, result 0 00:17:50.294 true 00:17:50.294 11:13:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 85123 00:17:50.294 11:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 85123 ']' 00:17:50.294 11:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 85123 00:17:50.294 11:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:17:50.294 11:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:50.294 11:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85123 00:17:50.294 killing process with pid 85123 00:17:50.294 Received shutdown signal, test time was about 4.000000 seconds 00:17:50.294 00:17:50.294 Latency(us) 00:17:50.294 [2024-11-27T11:13:19.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.294 [2024-11-27T11:13:19.177Z] =================================================================================================================== 00:17:50.294 [2024-11-27T11:13:19.177Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.294 11:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:50.294 11:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:50.294 11:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85123' 00:17:50.294 11:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 85123 00:17:50.294 11:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 85123 00:17:52.211 11:13:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:52.211 Remove shared memory files 00:17:52.211 11:13:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:52.211 11:13:20 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:52.211 11:13:20 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:52.211 11:13:20 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:52.211 11:13:20 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:52.211 11:13:20 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:52.211 11:13:20 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:52.211 ************************************ 00:17:52.211 END TEST ftl_bdevperf 00:17:52.211 ************************************ 00:17:52.211 00:17:52.211 real 0m22.489s 00:17:52.211 user 0m25.115s 00:17:52.211 sys 0m1.033s 00:17:52.211 11:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.211 11:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:52.211 11:13:20 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:52.211 11:13:20 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:52.211 11:13:20 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.211 11:13:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:52.211 ************************************ 00:17:52.211 START TEST ftl_trim 00:17:52.211 ************************************ 00:17:52.211 11:13:20 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:52.211 * Looking for test storage... 00:17:52.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.211 11:13:20 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:52.211 11:13:20 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:17:52.211 11:13:20 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:52.211 11:13:21 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:52.211 11:13:21 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.212 11:13:21 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:52.212 11:13:21 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.212 11:13:21 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.212 11:13:21 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.212 11:13:21 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:52.212 11:13:21 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.212 11:13:21 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:52.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.212 --rc genhtml_branch_coverage=1 00:17:52.212 --rc genhtml_function_coverage=1 00:17:52.212 --rc genhtml_legend=1 00:17:52.212 --rc geninfo_all_blocks=1 00:17:52.212 --rc geninfo_unexecuted_blocks=1 00:17:52.212 00:17:52.212 ' 00:17:52.212 11:13:21 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:52.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.212 --rc genhtml_branch_coverage=1 00:17:52.212 --rc genhtml_function_coverage=1 00:17:52.212 --rc genhtml_legend=1 00:17:52.212 --rc geninfo_all_blocks=1 00:17:52.212 --rc geninfo_unexecuted_blocks=1 00:17:52.212 00:17:52.212 ' 00:17:52.212 11:13:21 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:52.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.212 --rc genhtml_branch_coverage=1 00:17:52.212 --rc genhtml_function_coverage=1 00:17:52.212 --rc genhtml_legend=1 00:17:52.212 --rc geninfo_all_blocks=1 00:17:52.212 --rc geninfo_unexecuted_blocks=1 00:17:52.212 00:17:52.212 ' 00:17:52.212 11:13:21 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:52.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.212 --rc genhtml_branch_coverage=1 00:17:52.212 --rc genhtml_function_coverage=1 00:17:52.212 --rc genhtml_legend=1 00:17:52.212 --rc geninfo_all_blocks=1 00:17:52.212 --rc geninfo_unexecuted_blocks=1 00:17:52.212 00:17:52.212 ' 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:52.212 11:13:21 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85465 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:52.212 11:13:21 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85465 00:17:52.212 11:13:21 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85465 ']' 00:17:52.212 11:13:21 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.212 11:13:21 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.212 11:13:21 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.212 11:13:21 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.212 11:13:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:52.473 [2024-11-27 11:13:21.167699] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:52.473 [2024-11-27 11:13:21.168092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85465 ] 00:17:52.473 [2024-11-27 11:13:21.322187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:52.734 [2024-11-27 11:13:21.374887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.734 [2024-11-27 11:13:21.375198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.734 [2024-11-27 11:13:21.375245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.306 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:53.306 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:53.306 11:13:22 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:53.306 11:13:22 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:53.306 11:13:22 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:53.306 11:13:22 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:53.306 11:13:22 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:53.306 11:13:22 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:53.567 11:13:22 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:53.567 11:13:22 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:53.567 11:13:22 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:53.567 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:53.567 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:53.567 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:53.567 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:53.567 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:53.830 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:53.830 { 00:17:53.830 "name": "nvme0n1", 00:17:53.830 "aliases": [ 
00:17:53.830 "c08fb880-4c7e-4af0-956d-ab41f18c355e" 00:17:53.830 ], 00:17:53.830 "product_name": "NVMe disk", 00:17:53.830 "block_size": 4096, 00:17:53.830 "num_blocks": 1310720, 00:17:53.830 "uuid": "c08fb880-4c7e-4af0-956d-ab41f18c355e", 00:17:53.830 "numa_id": -1, 00:17:53.830 "assigned_rate_limits": { 00:17:53.830 "rw_ios_per_sec": 0, 00:17:53.830 "rw_mbytes_per_sec": 0, 00:17:53.830 "r_mbytes_per_sec": 0, 00:17:53.830 "w_mbytes_per_sec": 0 00:17:53.830 }, 00:17:53.830 "claimed": true, 00:17:53.830 "claim_type": "read_many_write_one", 00:17:53.830 "zoned": false, 00:17:53.830 "supported_io_types": { 00:17:53.830 "read": true, 00:17:53.830 "write": true, 00:17:53.830 "unmap": true, 00:17:53.830 "flush": true, 00:17:53.830 "reset": true, 00:17:53.830 "nvme_admin": true, 00:17:53.830 "nvme_io": true, 00:17:53.830 "nvme_io_md": false, 00:17:53.830 "write_zeroes": true, 00:17:53.830 "zcopy": false, 00:17:53.830 "get_zone_info": false, 00:17:53.830 "zone_management": false, 00:17:53.830 "zone_append": false, 00:17:53.830 "compare": true, 00:17:53.830 "compare_and_write": false, 00:17:53.830 "abort": true, 00:17:53.830 "seek_hole": false, 00:17:53.830 "seek_data": false, 00:17:53.830 "copy": true, 00:17:53.830 "nvme_iov_md": false 00:17:53.830 }, 00:17:53.830 "driver_specific": { 00:17:53.830 "nvme": [ 00:17:53.830 { 00:17:53.830 "pci_address": "0000:00:11.0", 00:17:53.830 "trid": { 00:17:53.830 "trtype": "PCIe", 00:17:53.830 "traddr": "0000:00:11.0" 00:17:53.830 }, 00:17:53.830 "ctrlr_data": { 00:17:53.830 "cntlid": 0, 00:17:53.830 "vendor_id": "0x1b36", 00:17:53.830 "model_number": "QEMU NVMe Ctrl", 00:17:53.830 "serial_number": "12341", 00:17:53.830 "firmware_revision": "8.0.0", 00:17:53.830 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:53.830 "oacs": { 00:17:53.830 "security": 0, 00:17:53.830 "format": 1, 00:17:53.830 "firmware": 0, 00:17:53.830 "ns_manage": 1 00:17:53.830 }, 00:17:53.830 "multi_ctrlr": false, 00:17:53.830 "ana_reporting": false 00:17:53.830 }, 00:17:53.830 "vs": { 00:17:53.831 "nvme_version": "1.4" 00:17:53.831 }, 00:17:53.831 "ns_data": { 00:17:53.831 "id": 1, 00:17:53.831 "can_share": false 00:17:53.831 } 00:17:53.831 } 00:17:53.831 ], 00:17:53.831 "mp_policy": "active_passive" 00:17:53.831 } 00:17:53.831 } 00:17:53.831 ]' 00:17:53.831 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:53.831 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:53.831 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:53.831 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:53.831 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:53.831 11:13:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:17:53.831 11:13:22 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:53.831 11:13:22 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:53.831 11:13:22 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:53.831 11:13:22 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:53.831 11:13:22 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:54.093 11:13:22 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=bc92c43b-2067-4087-b8be-582997688a1f 00:17:54.093 11:13:22 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:54.093 11:13:22 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u bc92c43b-2067-4087-b8be-582997688a1f 00:17:54.354 11:13:23 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:54.617 11:13:23 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=1f041955-5ce1-4595-86ad-feb27b37031c 00:17:54.617 11:13:23 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1f041955-5ce1-4595-86ad-feb27b37031c 00:17:54.883 11:13:23 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=7e2ff4be-818d-4a94-86c5-6cb3a2b873eb 00:17:54.883 11:13:23 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7e2ff4be-818d-4a94-86c5-6cb3a2b873eb 00:17:54.883 11:13:23 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:54.883 11:13:23 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:54.883 11:13:23 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=7e2ff4be-818d-4a94-86c5-6cb3a2b873eb 00:17:54.883 11:13:23 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:54.883 11:13:23 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 7e2ff4be-818d-4a94-86c5-6cb3a2b873eb 00:17:54.883 11:13:23 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=7e2ff4be-818d-4a94-86c5-6cb3a2b873eb 00:17:54.883 11:13:23 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:54.883 11:13:23 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:54.883 11:13:23 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:54.883 11:13:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e2ff4be-818d-4a94-86c5-6cb3a2b873eb 00:17:54.883 11:13:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:54.883 { 00:17:54.883 "name": "7e2ff4be-818d-4a94-86c5-6cb3a2b873eb", 00:17:54.883 "aliases": [ 00:17:54.883 "lvs/nvme0n1p0" 00:17:54.883 ], 00:17:54.883 "product_name": "Logical Volume", 00:17:54.883 "block_size": 4096, 00:17:54.883 "num_blocks": 26476544, 00:17:54.883 "uuid": "7e2ff4be-818d-4a94-86c5-6cb3a2b873eb", 00:17:54.883 "assigned_rate_limits": { 00:17:54.883 "rw_ios_per_sec": 0, 00:17:54.883 "rw_mbytes_per_sec": 0, 00:17:54.883 "r_mbytes_per_sec": 0, 00:17:54.883 "w_mbytes_per_sec": 0 00:17:54.883 }, 00:17:54.883 "claimed": false, 00:17:54.883 "zoned": false, 00:17:54.883 "supported_io_types": { 00:17:54.883 "read": true, 00:17:54.883 "write": true, 00:17:54.883 "unmap": true, 00:17:54.883 "flush": false, 00:17:54.883 "reset": true, 00:17:54.883 "nvme_admin": false, 00:17:54.883 "nvme_io": false, 00:17:54.883 "nvme_io_md": false, 00:17:54.883 "write_zeroes": true, 00:17:54.883 "zcopy": false, 00:17:54.883 "get_zone_info": false, 00:17:54.883 "zone_management": false, 00:17:54.883 "zone_append": false, 00:17:54.883 "compare": false, 00:17:54.883 "compare_and_write": false, 00:17:54.883 "abort": false, 00:17:54.883 "seek_hole": true, 00:17:54.883 "seek_data": true, 00:17:54.883 "copy": false, 00:17:54.883 "nvme_iov_md": false 00:17:54.883 }, 00:17:54.883 "driver_specific": { 00:17:54.883 "lvol": { 00:17:54.883 "lvol_store_uuid": "1f041955-5ce1-4595-86ad-feb27b37031c", 00:17:54.883 "base_bdev": "nvme0n1", 00:17:54.883 "thin_provision": true, 00:17:54.883 "num_allocated_clusters": 0, 00:17:54.883 "snapshot": false, 00:17:54.883 "clone": false, 00:17:54.883 "esnap_clone": false 00:17:54.883 } 00:17:54.883 } 00:17:54.884 } 00:17:54.884 ]' 00:17:54.884 11:13:23 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:54.884 11:13:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:54.884 11:13:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:55.147 11:13:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:55.147 11:13:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:55.147 11:13:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:55.147 11:13:23 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:55.147 11:13:23 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:55.147 11:13:23 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:55.409 11:13:24 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:55.409 11:13:24 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:55.409 11:13:24 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 7e2ff4be-818d-4a94-86c5-6cb3a2b873eb 00:17:55.409 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=7e2ff4be-818d-4a94-86c5-6cb3a2b873eb 00:17:55.409 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:55.409 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:55.409 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:55.409 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e2ff4be-818d-4a94-86c5-6cb3a2b873eb 00:17:55.409 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:55.409 { 00:17:55.409 "name": "7e2ff4be-818d-4a94-86c5-6cb3a2b873eb", 00:17:55.409 "aliases": [ 00:17:55.409 "lvs/nvme0n1p0" 00:17:55.409 ], 00:17:55.409 "product_name": "Logical Volume", 00:17:55.409 "block_size": 4096, 00:17:55.409 "num_blocks": 26476544, 00:17:55.409 "uuid": "7e2ff4be-818d-4a94-86c5-6cb3a2b873eb", 00:17:55.409 "assigned_rate_limits": { 00:17:55.409 "rw_ios_per_sec": 0, 00:17:55.409 "rw_mbytes_per_sec": 0, 00:17:55.409 "r_mbytes_per_sec": 0, 00:17:55.409 "w_mbytes_per_sec": 0 00:17:55.409 }, 00:17:55.409 "claimed": false, 00:17:55.409 "zoned": false, 00:17:55.409 "supported_io_types": { 00:17:55.409 "read": true, 00:17:55.409 "write": true, 00:17:55.409 "unmap": true, 00:17:55.409 "flush": false, 00:17:55.409 "reset": true, 00:17:55.409 "nvme_admin": false, 00:17:55.409 "nvme_io": false, 00:17:55.409 "nvme_io_md": false, 00:17:55.409 "write_zeroes": true, 00:17:55.409 "zcopy": false, 00:17:55.409 "get_zone_info": false, 00:17:55.409 "zone_management": false, 00:17:55.409 "zone_append": false, 00:17:55.409 "compare": false, 00:17:55.409 "compare_and_write": false, 00:17:55.409 "abort": false, 00:17:55.409 "seek_hole": true, 00:17:55.409 "seek_data": true, 00:17:55.409 "copy": false, 00:17:55.409 "nvme_iov_md": false 00:17:55.409 }, 00:17:55.409 "driver_specific": { 00:17:55.409 "lvol": { 00:17:55.409 "lvol_store_uuid": "1f041955-5ce1-4595-86ad-feb27b37031c", 00:17:55.409 "base_bdev": "nvme0n1", 00:17:55.409 "thin_provision": true, 00:17:55.409 "num_allocated_clusters": 0, 00:17:55.410 "snapshot": false, 00:17:55.410 "clone": false, 00:17:55.410 "esnap_clone": false 00:17:55.410 } 00:17:55.410 } 00:17:55.410 } 00:17:55.410 ]' 00:17:55.410 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:55.669 11:13:24 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:17:55.669 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:55.669 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:55.669 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:55.669 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:55.669 11:13:24 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:55.669 11:13:24 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:55.669 11:13:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:55.669 11:13:24 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:55.669 11:13:24 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 7e2ff4be-818d-4a94-86c5-6cb3a2b873eb 00:17:55.669 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=7e2ff4be-818d-4a94-86c5-6cb3a2b873eb 00:17:55.669 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:55.669 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:55.669 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:55.669 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e2ff4be-818d-4a94-86c5-6cb3a2b873eb 00:17:55.927 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:55.927 { 00:17:55.927 "name": "7e2ff4be-818d-4a94-86c5-6cb3a2b873eb", 00:17:55.927 "aliases": [ 00:17:55.927 "lvs/nvme0n1p0" 00:17:55.927 ], 00:17:55.927 "product_name": "Logical Volume", 00:17:55.927 "block_size": 4096, 00:17:55.927 "num_blocks": 26476544, 00:17:55.927 "uuid": "7e2ff4be-818d-4a94-86c5-6cb3a2b873eb", 00:17:55.927 "assigned_rate_limits": { 00:17:55.927 "rw_ios_per_sec": 0, 00:17:55.927 "rw_mbytes_per_sec": 0, 00:17:55.927 "r_mbytes_per_sec": 0, 00:17:55.927 "w_mbytes_per_sec": 0 00:17:55.927 }, 00:17:55.927 "claimed": false, 00:17:55.927 "zoned": false, 00:17:55.927 "supported_io_types": { 00:17:55.927 "read": true, 00:17:55.927 "write": true, 00:17:55.927 "unmap": true, 00:17:55.927 "flush": false, 00:17:55.927 "reset": true, 00:17:55.927 "nvme_admin": false, 00:17:55.927 "nvme_io": false, 00:17:55.927 "nvme_io_md": false, 00:17:55.927 "write_zeroes": true, 00:17:55.927 "zcopy": false, 00:17:55.927 "get_zone_info": false, 00:17:55.927 "zone_management": false, 00:17:55.927 "zone_append": false, 00:17:55.927 "compare": false, 00:17:55.927 "compare_and_write": false, 00:17:55.927 "abort": false, 00:17:55.927 "seek_hole": true, 00:17:55.927 "seek_data": true, 00:17:55.927 "copy": false, 00:17:55.927 "nvme_iov_md": false 00:17:55.927 }, 00:17:55.927 "driver_specific": { 00:17:55.927 "lvol": { 00:17:55.927 "lvol_store_uuid": "1f041955-5ce1-4595-86ad-feb27b37031c", 00:17:55.927 "base_bdev": "nvme0n1", 00:17:55.927 "thin_provision": true, 00:17:55.927 "num_allocated_clusters": 0, 00:17:55.927 "snapshot": false, 00:17:55.927 "clone": false, 00:17:55.927 "esnap_clone": false 00:17:55.927 } 00:17:55.927 } 00:17:55.927 } 00:17:55.927 ]' 00:17:55.927 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:55.927 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:55.927 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:55.927 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:17:55.927 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:55.927 11:13:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:55.927 11:13:24 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:55.927 11:13:24 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7e2ff4be-818d-4a94-86c5-6cb3a2b873eb -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:56.186 [2024-11-27 11:13:24.981820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.186 [2024-11-27 11:13:24.981865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:56.186 [2024-11-27 11:13:24.981878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:56.186 [2024-11-27 11:13:24.981914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.186 [2024-11-27 11:13:24.984295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.186 [2024-11-27 11:13:24.984331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.186 [2024-11-27 11:13:24.984350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.350 ms 00:17:56.186 [2024-11-27 11:13:24.984362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.186 [2024-11-27 11:13:24.984464] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:56.186 [2024-11-27 11:13:24.984697] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:56.186 [2024-11-27 11:13:24.984711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.186 [2024-11-27 11:13:24.984722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.186 [2024-11-27 11:13:24.984732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:17:56.186 [2024-11-27 11:13:24.984741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.186 [2024-11-27 11:13:24.984838] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5e9e33ba-64fe-44ff-8c18-2e047b084a11 00:17:56.186 [2024-11-27 11:13:24.985929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.186 [2024-11-27 11:13:24.985958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:56.186 [2024-11-27 11:13:24.985969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:56.186 [2024-11-27 11:13:24.985977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.186 [2024-11-27 11:13:24.991314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.186 [2024-11-27 11:13:24.991435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.186 [2024-11-27 11:13:24.991452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.242 ms 00:17:56.186 [2024-11-27 11:13:24.991460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.186 [2024-11-27 11:13:24.991753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.186 [2024-11-27 11:13:24.991769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.186 [2024-11-27 11:13:24.991779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.245 ms 00:17:56.186 [2024-11-27 11:13:24.991914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.186 [2024-11-27 11:13:24.991970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.186 [2024-11-27 11:13:24.991982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:56.186 [2024-11-27 11:13:24.991991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:56.186 [2024-11-27 11:13:24.991998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.186 [2024-11-27 11:13:24.992035] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:56.186 [2024-11-27 11:13:24.993513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.186 [2024-11-27 11:13:24.993543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.186 [2024-11-27 11:13:24.993552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.484 ms 00:17:56.187 [2024-11-27 11:13:24.993561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.187 [2024-11-27 11:13:24.993609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.187 [2024-11-27 11:13:24.993621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:56.187 [2024-11-27 11:13:24.993629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:56.187 [2024-11-27 11:13:24.993639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.187 [2024-11-27 11:13:24.993672] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:56.187 [2024-11-27 11:13:24.993805] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:56.187 [2024-11-27 11:13:24.993818] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:56.187 [2024-11-27 11:13:24.993830] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:56.187 [2024-11-27 11:13:24.993840] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:56.187 [2024-11-27 11:13:24.993851] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:56.187 [2024-11-27 11:13:24.993874] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:56.187 [2024-11-27 11:13:24.993883] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:56.187 [2024-11-27 11:13:24.993902] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:56.187 [2024-11-27 11:13:24.993913] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:56.187 [2024-11-27 11:13:24.993921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.187 [2024-11-27 11:13:24.993929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:56.187 [2024-11-27 11:13:24.993937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:17:56.187 [2024-11-27 11:13:24.993948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.187 [2024-11-27 11:13:24.994049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.187 
[2024-11-27 11:13:24.994060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:56.187 [2024-11-27 11:13:24.994067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:56.187 [2024-11-27 11:13:24.994076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.187 [2024-11-27 11:13:24.994208] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:56.187 [2024-11-27 11:13:24.994225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:56.187 [2024-11-27 11:13:24.994234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.187 [2024-11-27 11:13:24.994244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.187 [2024-11-27 11:13:24.994254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:56.187 [2024-11-27 11:13:24.994264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:56.187 [2024-11-27 11:13:24.994271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:56.187 [2024-11-27 11:13:24.994281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:56.187 [2024-11-27 11:13:24.994289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:56.187 [2024-11-27 11:13:24.994297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.187 [2024-11-27 11:13:24.994305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:56.187 [2024-11-27 11:13:24.994315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:56.187 [2024-11-27 11:13:24.994322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.187 [2024-11-27 11:13:24.994334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:56.187 [2024-11-27 11:13:24.994342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:56.187 [2024-11-27 11:13:24.994351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.187 [2024-11-27 11:13:24.994359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:56.187 [2024-11-27 11:13:24.994368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:56.187 [2024-11-27 11:13:24.994375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.187 [2024-11-27 11:13:24.994384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:56.187 [2024-11-27 11:13:24.994391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:56.187 [2024-11-27 11:13:24.994400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.187 [2024-11-27 11:13:24.994407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:56.187 [2024-11-27 11:13:24.994416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:56.187 [2024-11-27 11:13:24.994423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.187 [2024-11-27 11:13:24.994432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:56.187 [2024-11-27 11:13:24.994439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:56.187 [2024-11-27 11:13:24.994448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.187 [2024-11-27 11:13:24.994456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:56.187 [2024-11-27 11:13:24.994466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:56.187 [2024-11-27 11:13:24.994474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.187 [2024-11-27 11:13:24.994482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:56.187 [2024-11-27 11:13:24.994490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:56.187 [2024-11-27 11:13:24.994500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.187 [2024-11-27 11:13:24.994508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:56.187 [2024-11-27 11:13:24.994516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:56.187 [2024-11-27 11:13:24.994524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.187 [2024-11-27 11:13:24.994533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:56.187 [2024-11-27 11:13:24.994541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:56.187 [2024-11-27 11:13:24.994548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.187 [2024-11-27 11:13:24.994555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:56.187 [2024-11-27 11:13:24.994564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:56.187 [2024-11-27 11:13:24.994571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.187 [2024-11-27 11:13:24.994578] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:56.187 [2024-11-27 11:13:24.994586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:56.187 [2024-11-27 11:13:24.994596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.187 [2024-11-27 11:13:24.994603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.187 [2024-11-27 11:13:24.994612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:56.187 [2024-11-27 11:13:24.994619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:56.187 [2024-11-27 11:13:24.994627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:56.187 [2024-11-27 11:13:24.994633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:56.187 [2024-11-27 11:13:24.994641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:56.187 [2024-11-27 11:13:24.994648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:56.187 [2024-11-27 11:13:24.994658] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:56.187 [2024-11-27 11:13:24.994667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.187 [2024-11-27 11:13:24.994677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:56.187 [2024-11-27 11:13:24.994684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:56.187 [2024-11-27 11:13:24.994692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:56.187 [2024-11-27 11:13:24.994699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:56.187 [2024-11-27 11:13:24.994708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:56.187 [2024-11-27 11:13:24.994715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:56.187 [2024-11-27 11:13:24.994725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:56.187 [2024-11-27 11:13:24.994732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:56.187 [2024-11-27 11:13:24.994741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:56.187 [2024-11-27 11:13:24.994748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:56.187 [2024-11-27 11:13:24.994759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:56.187 [2024-11-27 11:13:24.994766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:56.187 [2024-11-27 11:13:24.994775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:56.187 [2024-11-27 11:13:24.994782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:56.187 [2024-11-27 11:13:24.994790] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:56.187 [2024-11-27 11:13:24.994799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.187 [2024-11-27 11:13:24.994822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:56.187 [2024-11-27 11:13:24.994830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:56.187 [2024-11-27 11:13:24.994838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:56.188 [2024-11-27 11:13:24.994846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:56.188 [2024-11-27 11:13:24.994855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.188 [2024-11-27 11:13:24.994862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:56.188 [2024-11-27 11:13:24.994875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:17:56.188 [2024-11-27 11:13:24.994882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.188 [2024-11-27 11:13:24.995202] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:56.188 [2024-11-27 11:13:24.995248] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:58.718 [2024-11-27 11:13:27.335552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.718 [2024-11-27 11:13:27.335817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:58.718 [2024-11-27 11:13:27.335964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2340.334 ms 00:17:58.718 [2024-11-27 11:13:27.336059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.718 [2024-11-27 11:13:27.355675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.718 [2024-11-27 11:13:27.355901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.718 [2024-11-27 11:13:27.356001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.414 ms 00:17:58.718 [2024-11-27 11:13:27.356089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.718 [2024-11-27 11:13:27.356311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.718 [2024-11-27 11:13:27.356418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:58.718 [2024-11-27 11:13:27.356495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:58.718 [2024-11-27 11:13:27.356545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.718 [2024-11-27 11:13:27.366903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.718 [2024-11-27 11:13:27.367019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.718 [2024-11-27 11:13:27.367081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.244 ms 00:17:58.718 [2024-11-27 11:13:27.367105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.718 [2024-11-27 11:13:27.367279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.718 [2024-11-27 11:13:27.367354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.718 [2024-11-27 11:13:27.367371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:58.718 [2024-11-27 11:13:27.367378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.718 [2024-11-27 11:13:27.367757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.718 [2024-11-27 11:13:27.367779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.718 [2024-11-27 11:13:27.367800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:17:58.718 [2024-11-27 11:13:27.367807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.718 [2024-11-27 11:13:27.367974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.718 [2024-11-27 11:13:27.367993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.718 [2024-11-27 11:13:27.368004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:17:58.718 [2024-11-27 11:13:27.368012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.718 [2024-11-27 11:13:27.373591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.718 [2024-11-27 11:13:27.373709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:58.718 [2024-11-27 11:13:27.373726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.544 ms 00:17:58.718 [2024-11-27 11:13:27.373734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.718 [2024-11-27 11:13:27.382650] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:58.718 [2024-11-27 11:13:27.397446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.718 [2024-11-27 11:13:27.397485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:58.718 [2024-11-27 11:13:27.397495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.647 ms 00:17:58.718 [2024-11-27 11:13:27.397504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.718 [2024-11-27 11:13:27.450643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.718 [2024-11-27 11:13:27.450691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:58.718 [2024-11-27 11:13:27.450704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.055 ms 00:17:58.718 [2024-11-27 11:13:27.450716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.718 [2024-11-27 11:13:27.450929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.718 [2024-11-27 11:13:27.450943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:58.718 [2024-11-27 11:13:27.450954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:17:58.718 [2024-11-27 11:13:27.450963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.718 [2024-11-27 11:13:27.453911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.718 [2024-11-27 11:13:27.453948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:58.718 [2024-11-27 11:13:27.453958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.915 ms 00:17:58.718 [2024-11-27 11:13:27.453968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.718 [2024-11-27 11:13:27.456683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.719 [2024-11-27 11:13:27.456814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:58.719 [2024-11-27 11:13:27.456831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:17:58.719 [2024-11-27 11:13:27.456839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.719 [2024-11-27 11:13:27.457166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.719 [2024-11-27 11:13:27.457184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:58.719 [2024-11-27 11:13:27.457196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:58.719 [2024-11-27 11:13:27.457207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.719 [2024-11-27 11:13:27.483634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.719 [2024-11-27 11:13:27.483671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:58.719 [2024-11-27 11:13:27.483681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.396 ms 00:17:58.719 [2024-11-27 11:13:27.483690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:58.719 [2024-11-27 11:13:27.487937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.719 [2024-11-27 11:13:27.487978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:58.719 [2024-11-27 11:13:27.487989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.180 ms 00:17:58.719 [2024-11-27 11:13:27.488001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.719 [2024-11-27 11:13:27.491160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.719 [2024-11-27 11:13:27.491193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:58.719 [2024-11-27 11:13:27.491203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.112 ms 00:17:58.719 [2024-11-27 11:13:27.491213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.719 [2024-11-27 11:13:27.494497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.719 [2024-11-27 11:13:27.494621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:58.719 [2024-11-27 11:13:27.494636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.245 ms 00:17:58.719 [2024-11-27 11:13:27.494647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.719 [2024-11-27 11:13:27.494702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.719 [2024-11-27 11:13:27.494713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:58.719 [2024-11-27 11:13:27.494721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:58.719 [2024-11-27 11:13:27.494781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.719 [2024-11-27 11:13:27.494854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.719 [2024-11-27 11:13:27.494864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:58.719 [2024-11-27 11:13:27.494873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:58.719 [2024-11-27 11:13:27.494881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.719 [2024-11-27 11:13:27.495749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:58.719 [2024-11-27 11:13:27.496739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2513.642 ms, result 0 00:17:58.719 [2024-11-27 11:13:27.497496] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:58.719 { 00:17:58.719 "name": "ftl0", 00:17:58.719 "uuid": "5e9e33ba-64fe-44ff-8c18-2e047b084a11" 00:17:58.719 } 00:17:58.719 11:13:27 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:58.719 11:13:27 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:58.719 11:13:27 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:58.719 11:13:27 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:17:58.719 11:13:27 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:58.719 11:13:27 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:58.719 11:13:27 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:58.977 11:13:27 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:59.235 [ 00:17:59.235 { 00:17:59.235 "name": "ftl0", 00:17:59.235 "aliases": [ 00:17:59.235 "5e9e33ba-64fe-44ff-8c18-2e047b084a11" 00:17:59.235 ], 00:17:59.236 "product_name": "FTL disk", 00:17:59.236 "block_size": 4096, 00:17:59.236 "num_blocks": 23592960, 00:17:59.236 "uuid": "5e9e33ba-64fe-44ff-8c18-2e047b084a11", 00:17:59.236 "assigned_rate_limits": { 00:17:59.236 "rw_ios_per_sec": 0, 00:17:59.236 "rw_mbytes_per_sec": 0, 00:17:59.236 "r_mbytes_per_sec": 0, 00:17:59.236 "w_mbytes_per_sec": 0 00:17:59.236 }, 00:17:59.236 "claimed": false, 00:17:59.236 "zoned": false, 00:17:59.236 "supported_io_types": { 00:17:59.236 "read": true, 00:17:59.236 "write": true, 00:17:59.236 "unmap": true, 00:17:59.236 "flush": true, 00:17:59.236 "reset": false, 00:17:59.236 "nvme_admin": false, 00:17:59.236 "nvme_io": false, 00:17:59.236 "nvme_io_md": false, 00:17:59.236 "write_zeroes": true, 00:17:59.236 "zcopy": false, 00:17:59.236 "get_zone_info": false, 00:17:59.236 "zone_management": false, 00:17:59.236 "zone_append": false, 00:17:59.236 "compare": false, 00:17:59.236 "compare_and_write": false, 00:17:59.236 "abort": false, 00:17:59.236 "seek_hole": false, 00:17:59.236 "seek_data": false, 00:17:59.236 "copy": false, 00:17:59.236 "nvme_iov_md": false 00:17:59.236 }, 00:17:59.236 "driver_specific": { 00:17:59.236 "ftl": { 00:17:59.236 "base_bdev": "7e2ff4be-818d-4a94-86c5-6cb3a2b873eb", 00:17:59.236 "cache": "nvc0n1p0" 00:17:59.236 } 00:17:59.236 } 00:17:59.236 } 00:17:59.236 ] 00:17:59.236 11:13:27 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:17:59.236 11:13:27 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:59.236 11:13:27 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:59.494 11:13:28 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:59.494 11:13:28 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:59.494 11:13:28 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:59.494 { 00:17:59.494 "name": "ftl0", 00:17:59.494 "aliases": [ 00:17:59.494 "5e9e33ba-64fe-44ff-8c18-2e047b084a11" 00:17:59.494 ], 00:17:59.494 "product_name": "FTL disk", 00:17:59.494 "block_size": 4096, 00:17:59.494 "num_blocks": 23592960, 00:17:59.494 "uuid": "5e9e33ba-64fe-44ff-8c18-2e047b084a11", 00:17:59.494 "assigned_rate_limits": { 00:17:59.494 "rw_ios_per_sec": 0, 00:17:59.494 "rw_mbytes_per_sec": 0, 00:17:59.494 "r_mbytes_per_sec": 0, 00:17:59.494 "w_mbytes_per_sec": 0 00:17:59.494 }, 00:17:59.494 "claimed": false, 00:17:59.494 "zoned": false, 00:17:59.494 "supported_io_types": { 00:17:59.494 "read": true, 00:17:59.494 "write": true, 00:17:59.494 "unmap": true, 00:17:59.494 "flush": true, 00:17:59.494 "reset": false, 00:17:59.494 "nvme_admin": false, 00:17:59.494 "nvme_io": false, 00:17:59.494 "nvme_io_md": false, 00:17:59.494 "write_zeroes": true, 00:17:59.494 "zcopy": false, 00:17:59.494 "get_zone_info": false, 00:17:59.494 "zone_management": false, 00:17:59.494 "zone_append": false, 00:17:59.494 "compare": false, 00:17:59.494 "compare_and_write": false, 00:17:59.494 "abort": false, 00:17:59.494 "seek_hole": false, 00:17:59.494 "seek_data": false, 00:17:59.494 "copy": false, 00:17:59.494 "nvme_iov_md": false 00:17:59.494 }, 00:17:59.494 "driver_specific": { 00:17:59.494 "ftl": { 00:17:59.494 "base_bdev": "7e2ff4be-818d-4a94-86c5-6cb3a2b873eb", 
00:17:59.494 "cache": "nvc0n1p0" 00:17:59.494 } 00:17:59.494 } 00:17:59.494 } 00:17:59.494 ]' 00:17:59.494 11:13:28 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:59.494 11:13:28 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:59.494 11:13:28 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:59.754 [2024-11-27 11:13:28.546125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.754 [2024-11-27 11:13:28.546167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:59.754 [2024-11-27 11:13:28.546182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:59.754 [2024-11-27 11:13:28.546190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.754 [2024-11-27 11:13:28.546232] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:59.754 [2024-11-27 11:13:28.546689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.754 [2024-11-27 11:13:28.546712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:59.754 [2024-11-27 11:13:28.546722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:17:59.754 [2024-11-27 11:13:28.546731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.754 [2024-11-27 11:13:28.547248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.754 [2024-11-27 11:13:28.547274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:59.754 [2024-11-27 11:13:28.547296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:17:59.754 [2024-11-27 11:13:28.547308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.754 [2024-11-27 11:13:28.550968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.754 [2024-11-27 11:13:28.550989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:59.754 [2024-11-27 11:13:28.550998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.618 ms 00:17:59.754 [2024-11-27 11:13:28.551008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.754 [2024-11-27 11:13:28.558247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.754 [2024-11-27 11:13:28.558279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:59.754 [2024-11-27 11:13:28.558288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.173 ms 00:17:59.754 [2024-11-27 11:13:28.558300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.754 [2024-11-27 11:13:28.559933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.754 [2024-11-27 11:13:28.559968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:59.754 [2024-11-27 11:13:28.559978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.561 ms 00:17:59.754 [2024-11-27 11:13:28.559987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.754 [2024-11-27 11:13:28.563761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.754 [2024-11-27 11:13:28.563815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:59.754 [2024-11-27 11:13:28.563826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.730 ms 00:17:59.754 [2024-11-27 11:13:28.563836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.754 [2024-11-27 11:13:28.564042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.754 [2024-11-27 11:13:28.564071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:59.754 [2024-11-27 11:13:28.564082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:17:59.754 [2024-11-27 11:13:28.564091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.754 [2024-11-27 11:13:28.565825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.754 [2024-11-27 11:13:28.565973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:59.754 [2024-11-27 11:13:28.565989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.706 ms 00:17:59.754 [2024-11-27 11:13:28.566000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.754 [2024-11-27 11:13:28.567176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.754 [2024-11-27 11:13:28.567206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:59.754 [2024-11-27 11:13:28.567215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:17:59.754 [2024-11-27 11:13:28.567224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.754 [2024-11-27 11:13:28.568089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.754 [2024-11-27 11:13:28.568125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:59.754 [2024-11-27 11:13:28.568133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:17:59.754 [2024-11-27 11:13:28.568142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.754 [2024-11-27 11:13:28.569235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.754 [2024-11-27 11:13:28.569347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:59.754 [2024-11-27 11:13:28.569361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:17:59.754 [2024-11-27 11:13:28.569371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.754 [2024-11-27 11:13:28.569411] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:59.754 [2024-11-27 11:13:28.569428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:59.754 [2024-11-27 11:13:28.569437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:59.754 [2024-11-27 11:13:28.569449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:59.754 [2024-11-27 11:13:28.569456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:59.754 [2024-11-27 11:13:28.569465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:59.754 [2024-11-27 11:13:28.569473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:59.754 [2024-11-27 11:13:28.569482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:59.754 [2024-11-27 11:13:28.569489] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 
11:13:28.569712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:17:59.755 [2024-11-27 11:13:28.569933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.569992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:59.755 [2024-11-27 11:13:28.570249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:59.756 [2024-11-27 11:13:28.570259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:59.756 [2024-11-27 11:13:28.570267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:59.756 [2024-11-27 11:13:28.570278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:59.756 [2024-11-27 11:13:28.570285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:59.756 [2024-11-27 11:13:28.570302] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:59.756 [2024-11-27 11:13:28.570309] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e9e33ba-64fe-44ff-8c18-2e047b084a11 00:17:59.756 [2024-11-27 11:13:28.570318] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:59.756 [2024-11-27 11:13:28.570325] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:59.756 [2024-11-27 11:13:28.570334] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:59.756 [2024-11-27 11:13:28.570341] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:59.756 [2024-11-27 11:13:28.570350] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:59.756 [2024-11-27 11:13:28.570359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:59.756 
[2024-11-27 11:13:28.570367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:59.756 [2024-11-27 11:13:28.570373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:59.756 [2024-11-27 11:13:28.570381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:59.756 [2024-11-27 11:13:28.570388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.756 [2024-11-27 11:13:28.570396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:59.756 [2024-11-27 11:13:28.570404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:17:59.756 [2024-11-27 11:13:28.570414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.571989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.756 [2024-11-27 11:13:28.572008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:59.756 [2024-11-27 11:13:28.572016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:17:59.756 [2024-11-27 11:13:28.572027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.572143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.756 [2024-11-27 11:13:28.572158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:59.756 [2024-11-27 11:13:28.572166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:59.756 [2024-11-27 11:13:28.572175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.577833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.756 [2024-11-27 11:13:28.577966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.756 [2024-11-27 11:13:28.578020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.756 [2024-11-27 11:13:28.578048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.578181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.756 [2024-11-27 11:13:28.578243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.756 [2024-11-27 11:13:28.578310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.756 [2024-11-27 11:13:28.578338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.578429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.756 [2024-11-27 11:13:28.578456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.756 [2024-11-27 11:13:28.578504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.756 [2024-11-27 11:13:28.578529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.578582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.756 [2024-11-27 11:13:28.578624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.756 [2024-11-27 11:13:28.578647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.756 [2024-11-27 11:13:28.578667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.588281] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:59.756 [2024-11-27 11:13:28.588411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.756 [2024-11-27 11:13:28.588460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.756 [2024-11-27 11:13:28.588486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.596574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.756 [2024-11-27 11:13:28.596710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.756 [2024-11-27 11:13:28.596771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.756 [2024-11-27 11:13:28.596858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.597000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.756 [2024-11-27 11:13:28.597064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:59.756 [2024-11-27 11:13:28.597114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.756 [2024-11-27 11:13:28.597138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.597200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.756 [2024-11-27 11:13:28.597228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:59.756 [2024-11-27 11:13:28.597253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.756 [2024-11-27 11:13:28.597273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.597372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.756 [2024-11-27 11:13:28.597419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:59.756 [2024-11-27 11:13:28.597440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.756 [2024-11-27 11:13:28.597460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.597587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.756 [2024-11-27 11:13:28.597648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:59.756 [2024-11-27 11:13:28.597737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.756 [2024-11-27 11:13:28.597764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.597990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.756 [2024-11-27 11:13:28.598048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:59.756 [2024-11-27 11:13:28.598164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.756 [2024-11-27 11:13:28.598189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 [2024-11-27 11:13:28.598254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.756 [2024-11-27 11:13:28.598281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:59.756 [2024-11-27 11:13:28.598300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.756 [2024-11-27 11:13:28.598349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.756 
[2024-11-27 11:13:28.598530] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.395 ms, result 0 00:17:59.756 true 00:17:59.756 11:13:28 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85465 00:17:59.756 11:13:28 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85465 ']' 00:17:59.756 11:13:28 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85465 00:17:59.756 11:13:28 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:59.756 11:13:28 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:59.756 11:13:28 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85465 00:18:00.015 11:13:28 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:00.015 11:13:28 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:00.015 11:13:28 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85465' 00:18:00.015 killing process with pid 85465 00:18:00.015 11:13:28 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85465 00:18:00.015 11:13:28 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85465 00:18:05.301 11:13:33 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:05.301 65536+0 records in 00:18:05.301 65536+0 records out 00:18:05.301 268435456 bytes (268 MB, 256 MiB) copied, 0.806037 s, 333 MB/s 00:18:05.301 11:13:34 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:05.301 [2024-11-27 11:13:34.117111] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:05.301 [2024-11-27 11:13:34.117243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85619 ] 00:18:05.562 [2024-11-27 11:13:34.268841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.562 [2024-11-27 11:13:34.313224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.562 [2024-11-27 11:13:34.426915] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:05.562 [2024-11-27 11:13:34.427001] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:05.825 [2024-11-27 11:13:34.583596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.825 [2024-11-27 11:13:34.583757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:05.825 [2024-11-27 11:13:34.583777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:05.825 [2024-11-27 11:13:34.583794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.825 [2024-11-27 11:13:34.586056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.825 [2024-11-27 11:13:34.586091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:05.825 [2024-11-27 11:13:34.586102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.238 ms 00:18:05.825 [2024-11-27 11:13:34.586109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.825 [2024-11-27 11:13:34.586179] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:05.825 [2024-11-27 11:13:34.586407] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:05.825 [2024-11-27 11:13:34.586423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.825 [2024-11-27 11:13:34.586430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:05.825 [2024-11-27 11:13:34.586444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:18:05.825 [2024-11-27 11:13:34.586452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.825 [2024-11-27 11:13:34.587651] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:05.825 [2024-11-27 11:13:34.590342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.825 [2024-11-27 11:13:34.590457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:05.825 [2024-11-27 11:13:34.590514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.694 ms 00:18:05.825 [2024-11-27 11:13:34.590541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.825 [2024-11-27 11:13:34.590654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.825 [2024-11-27 11:13:34.590862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:05.825 [2024-11-27 11:13:34.590902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:05.825 [2024-11-27 11:13:34.590929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.825 [2024-11-27 11:13:34.595745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:05.825 [2024-11-27 11:13:34.595852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:05.825 [2024-11-27 11:13:34.595964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.760 ms 00:18:05.825 [2024-11-27 11:13:34.595994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.825 [2024-11-27 11:13:34.596104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.825 [2024-11-27 11:13:34.596375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:05.825 [2024-11-27 11:13:34.596399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:05.825 [2024-11-27 11:13:34.596408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.825 [2024-11-27 11:13:34.596453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.825 [2024-11-27 11:13:34.596466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:05.825 [2024-11-27 11:13:34.596474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:05.825 [2024-11-27 11:13:34.596488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.825 [2024-11-27 11:13:34.596509] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:05.825 [2024-11-27 11:13:34.597847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.825 [2024-11-27 11:13:34.597970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:05.825 [2024-11-27 11:13:34.597990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.343 ms 00:18:05.825 [2024-11-27 11:13:34.598001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.825 [2024-11-27 11:13:34.598052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.825 [2024-11-27 11:13:34.598064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:05.825 [2024-11-27 11:13:34.598074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:05.825 [2024-11-27 11:13:34.598081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.825 [2024-11-27 11:13:34.598098] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:05.825 [2024-11-27 11:13:34.598115] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:05.825 [2024-11-27 11:13:34.598149] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:05.825 [2024-11-27 11:13:34.598163] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:05.825 [2024-11-27 11:13:34.598268] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:05.825 [2024-11-27 11:13:34.598277] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:05.825 [2024-11-27 11:13:34.598287] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:05.825 [2024-11-27 11:13:34.598300] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:05.825 [2024-11-27 11:13:34.598309] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:05.825 [2024-11-27 11:13:34.598317] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:05.825 [2024-11-27 11:13:34.598324] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:05.825 [2024-11-27 11:13:34.598334] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:05.825 [2024-11-27 11:13:34.598341] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:05.825 [2024-11-27 11:13:34.598348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.825 [2024-11-27 11:13:34.598357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:05.825 [2024-11-27 11:13:34.598366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:18:05.825 [2024-11-27 11:13:34.598373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.825 [2024-11-27 11:13:34.598460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.825 [2024-11-27 11:13:34.598468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:05.825 [2024-11-27 11:13:34.598475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:05.825 [2024-11-27 11:13:34.598482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.825 [2024-11-27 11:13:34.598585] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:05.825 [2024-11-27 11:13:34.598600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:05.825 [2024-11-27 11:13:34.598608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:05.825 [2024-11-27 11:13:34.598619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.825 [2024-11-27 11:13:34.598632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:05.825 [2024-11-27 11:13:34.598639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:05.825 [2024-11-27 11:13:34.598647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:05.825 [2024-11-27 11:13:34.598654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:05.825 [2024-11-27 11:13:34.598664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:05.825 [2024-11-27 11:13:34.598672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:05.825 [2024-11-27 11:13:34.598679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:05.825 [2024-11-27 11:13:34.598687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:05.825 [2024-11-27 11:13:34.598694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:05.825 [2024-11-27 11:13:34.598701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:05.825 [2024-11-27 11:13:34.598708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:05.825 [2024-11-27 11:13:34.598715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.825 [2024-11-27 11:13:34.598723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:05.825 [2024-11-27 11:13:34.598730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:05.825 [2024-11-27 11:13:34.598740] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.825 [2024-11-27 11:13:34.598748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:05.825 [2024-11-27 11:13:34.598755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:05.825 [2024-11-27 11:13:34.598763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.825 [2024-11-27 11:13:34.598770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:05.825 [2024-11-27 11:13:34.598777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:05.825 [2024-11-27 11:13:34.598788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.825 [2024-11-27 11:13:34.598795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:05.825 [2024-11-27 11:13:34.598802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:05.825 [2024-11-27 11:13:34.598810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.825 [2024-11-27 11:13:34.598817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:05.825 [2024-11-27 11:13:34.598824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:05.825 [2024-11-27 11:13:34.598831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.825 [2024-11-27 11:13:34.598838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:05.825 [2024-11-27 11:13:34.598846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:05.825 [2024-11-27 11:13:34.598853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:05.825 [2024-11-27 11:13:34.598860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:05.825 [2024-11-27 11:13:34.598868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:05.826 [2024-11-27 11:13:34.598875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:05.826 [2024-11-27 11:13:34.598882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:05.826 [2024-11-27 11:13:34.598907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:05.826 [2024-11-27 11:13:34.598915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.826 [2024-11-27 11:13:34.598923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:05.826 [2024-11-27 11:13:34.598930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:05.826 [2024-11-27 11:13:34.598936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.826 [2024-11-27 11:13:34.598943] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:05.826 [2024-11-27 11:13:34.598951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:05.826 [2024-11-27 11:13:34.598958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:05.826 [2024-11-27 11:13:34.598965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.826 [2024-11-27 11:13:34.598972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:05.826 [2024-11-27 11:13:34.598979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:05.826 [2024-11-27 11:13:34.598986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:05.826 
[2024-11-27 11:13:34.598993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:05.826 [2024-11-27 11:13:34.598999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:05.826 [2024-11-27 11:13:34.599006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:05.826 [2024-11-27 11:13:34.599013] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:05.826 [2024-11-27 11:13:34.599022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:05.826 [2024-11-27 11:13:34.599030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:05.826 [2024-11-27 11:13:34.599039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:05.826 [2024-11-27 11:13:34.599046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:05.826 [2024-11-27 11:13:34.599053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:05.826 [2024-11-27 11:13:34.599060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:05.826 [2024-11-27 11:13:34.599067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:05.826 [2024-11-27 11:13:34.599073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:05.826 [2024-11-27 11:13:34.599080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:05.826 [2024-11-27 11:13:34.599087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:05.826 [2024-11-27 11:13:34.599094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:05.826 [2024-11-27 11:13:34.599101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:05.826 [2024-11-27 11:13:34.599108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:05.826 [2024-11-27 11:13:34.599115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:05.826 [2024-11-27 11:13:34.599122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:05.826 [2024-11-27 11:13:34.599134] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:05.826 [2024-11-27 11:13:34.599142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:05.826 [2024-11-27 11:13:34.599150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:05.826 [2024-11-27 11:13:34.599159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:05.826 [2024-11-27 11:13:34.599166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:05.826 [2024-11-27 11:13:34.599173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:05.826 [2024-11-27 11:13:34.599180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.599186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:05.826 [2024-11-27 11:13:34.599195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:18:05.826 [2024-11-27 11:13:34.599202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.614875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.614925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:05.826 [2024-11-27 11:13:34.614937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.624 ms 00:18:05.826 [2024-11-27 11:13:34.614946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.615072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.615084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:05.826 [2024-11-27 11:13:34.615092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:05.826 [2024-11-27 11:13:34.615103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.623436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.623471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:05.826 [2024-11-27 11:13:34.623482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.312 ms 00:18:05.826 [2024-11-27 11:13:34.623490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.623536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.623550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:05.826 [2024-11-27 11:13:34.623562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:05.826 [2024-11-27 11:13:34.623571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.623912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.623928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:05.826 [2024-11-27 11:13:34.623938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:18:05.826 [2024-11-27 11:13:34.623946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.624083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.624098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:05.826 [2024-11-27 11:13:34.624109] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:18:05.826 [2024-11-27 11:13:34.624122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.629186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.629225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:05.826 [2024-11-27 11:13:34.629236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.039 ms 00:18:05.826 [2024-11-27 11:13:34.629244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.631902] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:05.826 [2024-11-27 11:13:34.631932] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:05.826 [2024-11-27 11:13:34.631952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.631959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:05.826 [2024-11-27 11:13:34.631967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:18:05.826 [2024-11-27 11:13:34.631974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.646470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.646594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:05.826 [2024-11-27 11:13:34.646616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.454 ms 00:18:05.826 [2024-11-27 11:13:34.646624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.648837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.648863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:05.826 [2024-11-27 11:13:34.648871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:18:05.826 [2024-11-27 11:13:34.648878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.650885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.650935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:05.826 [2024-11-27 11:13:34.650945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.947 ms 00:18:05.826 [2024-11-27 11:13:34.650958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.651293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.651310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:05.826 [2024-11-27 11:13:34.651319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:18:05.826 [2024-11-27 11:13:34.651326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.666853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.666909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:05.826 [2024-11-27 11:13:34.666921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.504 ms 00:18:05.826 [2024-11-27 11:13:34.666929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.826 [2024-11-27 11:13:34.674273] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:05.826 [2024-11-27 11:13:34.688312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.826 [2024-11-27 11:13:34.688352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:05.826 [2024-11-27 11:13:34.688364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.321 ms 00:18:05.826 [2024-11-27 11:13:34.688371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-27 11:13:34.688461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-27 11:13:34.688474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:05.827 [2024-11-27 11:13:34.688483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:05.827 [2024-11-27 11:13:34.688491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-27 11:13:34.688541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-27 11:13:34.688553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:05.827 [2024-11-27 11:13:34.688561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:05.827 [2024-11-27 11:13:34.688568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-27 11:13:34.688593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-27 11:13:34.688601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:05.827 [2024-11-27 11:13:34.688609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:05.827 [2024-11-27 11:13:34.688616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-27 11:13:34.688650] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:05.827 [2024-11-27 11:13:34.688660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-27 11:13:34.688668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:05.827 [2024-11-27 11:13:34.688677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:05.827 [2024-11-27 11:13:34.688684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-27 11:13:34.692675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-27 11:13:34.692827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:05.827 [2024-11-27 11:13:34.692844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.972 ms 00:18:05.827 [2024-11-27 11:13:34.692852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 [2024-11-27 11:13:34.692961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.827 [2024-11-27 11:13:34.692973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:05.827 [2024-11-27 11:13:34.692985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:05.827 [2024-11-27 11:13:34.692993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.827 
[2024-11-27 11:13:34.693794] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:05.827 [2024-11-27 11:13:34.694796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 109.918 ms, result 0 00:18:05.827 [2024-11-27 11:13:34.695644] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:06.089 [2024-11-27 11:13:34.705270] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:07.032  [2024-11-27T11:13:36.859Z] Copying: 17/256 [MB] (17 MBps) [2024-11-27T11:13:37.803Z] Copying: 34/256 [MB] (16 MBps) [2024-11-27T11:13:38.739Z] Copying: 47/256 [MB] (12 MBps) [2024-11-27T11:13:40.121Z] Copying: 81/256 [MB] (34 MBps) [2024-11-27T11:13:41.066Z] Copying: 102/256 [MB] (20 MBps) [2024-11-27T11:13:42.011Z] Copying: 123/256 [MB] (20 MBps) [2024-11-27T11:13:42.956Z] Copying: 135/256 [MB] (12 MBps) [2024-11-27T11:13:43.901Z] Copying: 149328/262144 [kB] (10132 kBps) [2024-11-27T11:13:44.841Z] Copying: 159400/262144 [kB] (10072 kBps) [2024-11-27T11:13:45.783Z] Copying: 170/256 [MB] (14 MBps) [2024-11-27T11:13:46.721Z] Copying: 180/256 [MB] (10 MBps) [2024-11-27T11:13:48.148Z] Copying: 197/256 [MB] (16 MBps) [2024-11-27T11:13:48.148Z] Copying: 248/256 [MB] (50 MBps) [2024-11-27T11:13:48.148Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-27 11:13:47.859337] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:19.265 [2024-11-27 11:13:47.860317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.265 [2024-11-27 11:13:47.860335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:19.265 [2024-11-27 11:13:47.860345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:19.265 [2024-11-27 11:13:47.860354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.265 [2024-11-27 11:13:47.860369] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:19.265 [2024-11-27 11:13:47.860724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.265 [2024-11-27 11:13:47.860742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:19.265 [2024-11-27 11:13:47.860750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:18:19.265 [2024-11-27 11:13:47.860756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.265 [2024-11-27 11:13:47.862068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.265 [2024-11-27 11:13:47.862093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:19.265 [2024-11-27 11:13:47.862101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:18:19.265 [2024-11-27 11:13:47.862107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.265 [2024-11-27 11:13:47.867049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.265 [2024-11-27 11:13:47.867076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:19.265 [2024-11-27 11:13:47.867084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.928 ms 00:18:19.265 [2024-11-27 11:13:47.867089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:18:19.265 [2024-11-27 11:13:47.872479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.265 [2024-11-27 11:13:47.872624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:19.265 [2024-11-27 11:13:47.872641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.361 ms 00:18:19.265 [2024-11-27 11:13:47.872647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.265 [2024-11-27 11:13:47.873677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.265 [2024-11-27 11:13:47.873699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:19.265 [2024-11-27 11:13:47.873706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:18:19.265 [2024-11-27 11:13:47.873711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.265 [2024-11-27 11:13:47.876824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.265 [2024-11-27 11:13:47.876854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:19.265 [2024-11-27 11:13:47.876861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.082 ms 00:18:19.265 [2024-11-27 11:13:47.876869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.265 [2024-11-27 11:13:47.876967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.265 [2024-11-27 11:13:47.876974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:19.265 [2024-11-27 11:13:47.876980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:19.265 [2024-11-27 11:13:47.876986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.265 [2024-11-27 11:13:47.878552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.265 [2024-11-27 11:13:47.878577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:19.265 [2024-11-27 11:13:47.878583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.551 ms 00:18:19.265 [2024-11-27 11:13:47.878589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.265 [2024-11-27 11:13:47.879747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.265 [2024-11-27 11:13:47.879839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:19.265 [2024-11-27 11:13:47.879849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.135 ms 00:18:19.265 [2024-11-27 11:13:47.879854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.265 [2024-11-27 11:13:47.880660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.265 [2024-11-27 11:13:47.880682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:19.265 [2024-11-27 11:13:47.880688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:18:19.265 [2024-11-27 11:13:47.880693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.265 [2024-11-27 11:13:47.881553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.266 [2024-11-27 11:13:47.881578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:19.266 [2024-11-27 11:13:47.881585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:18:19.266 [2024-11-27 
11:13:47.881590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.266 [2024-11-27 11:13:47.881612] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:19.266 [2024-11-27 11:13:47.881627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881760] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 
11:13:47.881912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.881996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:18:19.266 [2024-11-27 11:13:47.882052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:19.266 [2024-11-27 11:13:47.882122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:19.267 [2024-11-27 11:13:47.882205] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:19.267 [2024-11-27 11:13:47.882211] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e9e33ba-64fe-44ff-8c18-2e047b084a11 00:18:19.267 [2024-11-27 11:13:47.882216] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:19.267 [2024-11-27 11:13:47.882226] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:19.267 [2024-11-27 11:13:47.882231] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:19.267 [2024-11-27 11:13:47.882237] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:19.267 [2024-11-27 11:13:47.882242] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:19.267 [2024-11-27 11:13:47.882248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:19.267 [2024-11-27 11:13:47.882253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:19.267 [2024-11-27 11:13:47.882258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:19.267 [2024-11-27 11:13:47.882263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:19.267 [2024-11-27 11:13:47.882268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.267 [2024-11-27 11:13:47.882273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:19.267 [2024-11-27 11:13:47.882279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:18:19.267 [2024-11-27 11:13:47.882286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.883632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.267 [2024-11-27 11:13:47.883701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:19.267 [2024-11-27 11:13:47.883747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.333 ms 00:18:19.267 [2024-11-27 11:13:47.883766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.883852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.267 [2024-11-27 11:13:47.883872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:19.267 [2024-11-27 11:13:47.883925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:19.267 [2024-11-27 11:13:47.883942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.887873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.267 [2024-11-27 11:13:47.887973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:19.267 [2024-11-27 11:13:47.888016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.267 [2024-11-27 11:13:47.888033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.888081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.267 [2024-11-27 11:13:47.888175] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:19.267 [2024-11-27 11:13:47.888201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.267 [2024-11-27 11:13:47.888215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.888255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.267 [2024-11-27 11:13:47.888273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:19.267 [2024-11-27 11:13:47.888320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.267 [2024-11-27 11:13:47.888337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.888359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.267 [2024-11-27 11:13:47.888375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:19.267 [2024-11-27 11:13:47.888389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.267 [2024-11-27 11:13:47.888409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.895787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.267 [2024-11-27 11:13:47.895950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:19.267 [2024-11-27 11:13:47.895992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.267 [2024-11-27 11:13:47.896036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.901993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.267 [2024-11-27 11:13:47.902113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:19.267 [2024-11-27 11:13:47.902169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.267 [2024-11-27 11:13:47.902186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.902218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.267 [2024-11-27 11:13:47.902234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:19.267 [2024-11-27 11:13:47.902249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.267 [2024-11-27 11:13:47.902294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.902327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.267 [2024-11-27 11:13:47.902342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:19.267 [2024-11-27 11:13:47.902357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.267 [2024-11-27 11:13:47.902372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.902439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.267 [2024-11-27 11:13:47.902518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:19.267 [2024-11-27 11:13:47.902533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.267 [2024-11-27 11:13:47.902547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.902581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:18:19.267 [2024-11-27 11:13:47.902598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:19.267 [2024-11-27 11:13:47.902640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.267 [2024-11-27 11:13:47.902658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.902698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.267 [2024-11-27 11:13:47.902787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:19.267 [2024-11-27 11:13:47.902805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.267 [2024-11-27 11:13:47.902819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.902867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:19.267 [2024-11-27 11:13:47.902886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:19.267 [2024-11-27 11:13:47.902945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:19.267 [2024-11-27 11:13:47.902963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.267 [2024-11-27 11:13:47.903088] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 42.745 ms, result 0 00:18:19.529 00:18:19.529 00:18:19.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.529 11:13:48 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85771 00:18:19.529 11:13:48 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85771 00:18:19.529 11:13:48 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85771 ']' 00:18:19.529 11:13:48 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.529 11:13:48 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.529 11:13:48 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:19.529 11:13:48 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.529 11:13:48 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.529 11:13:48 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:19.790 [2024-11-27 11:13:48.418451] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
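At this point trim.sh launches spdk_tgt with FTL init logging and blocks on waitforlisten until the target is accepting RPCs on /var/tmp/spdk.sock. A minimal bash sketch of that start-and-wait pattern, using the binary, socket, and rpc.py paths visible in this run; the polling loop and the ftl_config.json file name are illustrative assumptions, not the harness code:

    # Start the SPDK target in the background (same binary and -L flag as trim.sh@71 above)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!

    # Wait for the RPC UNIX domain socket before sending any commands; this loop is an
    # illustrative stand-in for the waitforlisten helper used by the test scripts.
    while [ ! -S /var/tmp/spdk.sock ]; do
        sleep 0.2
    done

    # With the socket up, RPCs can be issued, e.g. replaying a saved configuration as the
    # harness does next (ftl_config.json is a placeholder name, not taken from this run).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < ftl_config.json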
00:18:19.790 [2024-11-27 11:13:48.418588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85771 ] 00:18:19.790 [2024-11-27 11:13:48.570268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.790 [2024-11-27 11:13:48.621786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.735 11:13:49 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.735 11:13:49 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:20.735 11:13:49 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:20.735 [2024-11-27 11:13:49.465516] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:20.735 [2024-11-27 11:13:49.465590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:20.999 [2024-11-27 11:13:49.642785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.999 [2024-11-27 11:13:49.642846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:20.999 [2024-11-27 11:13:49.642862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:20.999 [2024-11-27 11:13:49.642873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.999 [2024-11-27 11:13:49.645551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.999 [2024-11-27 11:13:49.645610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:20.999 [2024-11-27 11:13:49.645621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.637 ms 00:18:20.999 [2024-11-27 11:13:49.645633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.999 [2024-11-27 11:13:49.645738] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:20.999 [2024-11-27 11:13:49.646016] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:20.999 [2024-11-27 11:13:49.646032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.999 [2024-11-27 11:13:49.646043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:20.999 [2024-11-27 11:13:49.646053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:18:20.999 [2024-11-27 11:13:49.646063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.999 [2024-11-27 11:13:49.648033] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:20.999 [2024-11-27 11:13:49.651647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.999 [2024-11-27 11:13:49.651697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:20.999 [2024-11-27 11:13:49.651710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.614 ms 00:18:20.999 [2024-11-27 11:13:49.651718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.999 [2024-11-27 11:13:49.651796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.999 [2024-11-27 11:13:49.651806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:20.999 [2024-11-27 11:13:49.651821] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:20.999 [2024-11-27 11:13:49.651828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.999 [2024-11-27 11:13:49.659696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.999 [2024-11-27 11:13:49.659737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:20.999 [2024-11-27 11:13:49.659750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.817 ms 00:18:20.999 [2024-11-27 11:13:49.659757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.999 [2024-11-27 11:13:49.659876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.999 [2024-11-27 11:13:49.659928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:20.999 [2024-11-27 11:13:49.659940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:20.999 [2024-11-27 11:13:49.659948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.999 [2024-11-27 11:13:49.659981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.999 [2024-11-27 11:13:49.659990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:20.999 [2024-11-27 11:13:49.660000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:20.999 [2024-11-27 11:13:49.660011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.999 [2024-11-27 11:13:49.660038] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:20.999 [2024-11-27 11:13:49.662056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.999 [2024-11-27 11:13:49.662097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:20.999 [2024-11-27 11:13:49.662106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.027 ms 00:18:21.000 [2024-11-27 11:13:49.662120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.000 [2024-11-27 11:13:49.662165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.000 [2024-11-27 11:13:49.662179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:21.000 [2024-11-27 11:13:49.662187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:21.000 [2024-11-27 11:13:49.662197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.000 [2024-11-27 11:13:49.662217] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:21.000 [2024-11-27 11:13:49.662238] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:21.000 [2024-11-27 11:13:49.662281] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:21.000 [2024-11-27 11:13:49.662300] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:21.000 [2024-11-27 11:13:49.662406] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:21.000 [2024-11-27 11:13:49.662422] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:21.000 [2024-11-27 11:13:49.662432] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:21.000 [2024-11-27 11:13:49.662445] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:21.000 [2024-11-27 11:13:49.662455] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:21.000 [2024-11-27 11:13:49.662467] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:21.000 [2024-11-27 11:13:49.662475] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:21.000 [2024-11-27 11:13:49.662483] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:21.000 [2024-11-27 11:13:49.662492] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:21.000 [2024-11-27 11:13:49.662501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.000 [2024-11-27 11:13:49.662511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:21.000 [2024-11-27 11:13:49.662526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:18:21.000 [2024-11-27 11:13:49.662533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.000 [2024-11-27 11:13:49.662623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.000 [2024-11-27 11:13:49.662631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:21.000 [2024-11-27 11:13:49.662641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:21.000 [2024-11-27 11:13:49.662648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.000 [2024-11-27 11:13:49.662751] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:21.000 [2024-11-27 11:13:49.662762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:21.000 [2024-11-27 11:13:49.662775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:21.000 [2024-11-27 11:13:49.662784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.000 [2024-11-27 11:13:49.662800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:21.000 [2024-11-27 11:13:49.662807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:21.000 [2024-11-27 11:13:49.662818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:21.000 [2024-11-27 11:13:49.662827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:21.000 [2024-11-27 11:13:49.662844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:21.000 [2024-11-27 11:13:49.662852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:21.000 [2024-11-27 11:13:49.662862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:21.000 [2024-11-27 11:13:49.662869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:21.000 [2024-11-27 11:13:49.662879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:21.000 [2024-11-27 11:13:49.662909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:21.000 [2024-11-27 11:13:49.662919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:21.000 [2024-11-27 11:13:49.662927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.000 
[2024-11-27 11:13:49.662936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:21.000 [2024-11-27 11:13:49.662945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:21.000 [2024-11-27 11:13:49.662954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.000 [2024-11-27 11:13:49.662962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:21.000 [2024-11-27 11:13:49.662973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:21.000 [2024-11-27 11:13:49.662982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:21.000 [2024-11-27 11:13:49.662994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:21.000 [2024-11-27 11:13:49.663002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:21.000 [2024-11-27 11:13:49.663012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:21.000 [2024-11-27 11:13:49.663020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:21.000 [2024-11-27 11:13:49.663031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:21.000 [2024-11-27 11:13:49.663039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:21.000 [2024-11-27 11:13:49.663049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:21.000 [2024-11-27 11:13:49.663058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:21.000 [2024-11-27 11:13:49.663069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:21.000 [2024-11-27 11:13:49.663077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:21.000 [2024-11-27 11:13:49.663086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:21.000 [2024-11-27 11:13:49.663092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:21.000 [2024-11-27 11:13:49.663101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:21.000 [2024-11-27 11:13:49.663107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:21.000 [2024-11-27 11:13:49.663118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:21.000 [2024-11-27 11:13:49.663125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:21.000 [2024-11-27 11:13:49.663134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:21.000 [2024-11-27 11:13:49.663140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.000 [2024-11-27 11:13:49.663149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:21.000 [2024-11-27 11:13:49.663155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:21.000 [2024-11-27 11:13:49.663165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.000 [2024-11-27 11:13:49.663171] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:21.000 [2024-11-27 11:13:49.663181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:21.000 [2024-11-27 11:13:49.663188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:21.000 [2024-11-27 11:13:49.663197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.000 [2024-11-27 11:13:49.663205] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:21.000 [2024-11-27 11:13:49.663214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:21.000 [2024-11-27 11:13:49.663221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:21.000 [2024-11-27 11:13:49.663230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:21.000 [2024-11-27 11:13:49.663237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:21.000 [2024-11-27 11:13:49.663248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:21.000 [2024-11-27 11:13:49.663259] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:21.000 [2024-11-27 11:13:49.663272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:21.000 [2024-11-27 11:13:49.663281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:21.000 [2024-11-27 11:13:49.663292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:21.000 [2024-11-27 11:13:49.663299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:21.000 [2024-11-27 11:13:49.663308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:21.000 [2024-11-27 11:13:49.663316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:21.000 [2024-11-27 11:13:49.663325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:21.000 [2024-11-27 11:13:49.663332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:21.000 [2024-11-27 11:13:49.663340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:21.000 [2024-11-27 11:13:49.663347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:21.000 [2024-11-27 11:13:49.663356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:21.000 [2024-11-27 11:13:49.663363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:21.000 [2024-11-27 11:13:49.663372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:21.000 [2024-11-27 11:13:49.663379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:21.000 [2024-11-27 11:13:49.663390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:21.000 [2024-11-27 11:13:49.663398] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:21.000 [2024-11-27 
11:13:49.663408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:21.000 [2024-11-27 11:13:49.663418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:21.001 [2024-11-27 11:13:49.663428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:21.001 [2024-11-27 11:13:49.663435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:21.001 [2024-11-27 11:13:49.663444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:21.001 [2024-11-27 11:13:49.663451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.663460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:21.001 [2024-11-27 11:13:49.663472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:18:21.001 [2024-11-27 11:13:49.663489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.677204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.677407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:21.001 [2024-11-27 11:13:49.677428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.653 ms 00:18:21.001 [2024-11-27 11:13:49.677438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.677567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.677582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:21.001 [2024-11-27 11:13:49.677594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:21.001 [2024-11-27 11:13:49.677604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.690502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.690552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:21.001 [2024-11-27 11:13:49.690564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.876 ms 00:18:21.001 [2024-11-27 11:13:49.690574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.690654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.690668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:21.001 [2024-11-27 11:13:49.690677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:21.001 [2024-11-27 11:13:49.690687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.691264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.691306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:21.001 [2024-11-27 11:13:49.691317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:18:21.001 [2024-11-27 11:13:49.691329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.691486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.691502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:21.001 [2024-11-27 11:13:49.691515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:18:21.001 [2024-11-27 11:13:49.691527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.717378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.717450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:21.001 [2024-11-27 11:13:49.717467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.823 ms 00:18:21.001 [2024-11-27 11:13:49.717486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.721800] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:21.001 [2024-11-27 11:13:49.721860] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:21.001 [2024-11-27 11:13:49.721875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.721888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:21.001 [2024-11-27 11:13:49.721922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.223 ms 00:18:21.001 [2024-11-27 11:13:49.721934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.737835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.737900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:21.001 [2024-11-27 11:13:49.737914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.807 ms 00:18:21.001 [2024-11-27 11:13:49.737927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.740867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.740951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:21.001 [2024-11-27 11:13:49.740963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:18:21.001 [2024-11-27 11:13:49.740972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.743606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.743783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:21.001 [2024-11-27 11:13:49.743802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.583 ms 00:18:21.001 [2024-11-27 11:13:49.743812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.744181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.744201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:21.001 [2024-11-27 11:13:49.744211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:18:21.001 [2024-11-27 11:13:49.744220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 
11:13:49.768056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.768117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:21.001 [2024-11-27 11:13:49.768131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.808 ms 00:18:21.001 [2024-11-27 11:13:49.768145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.776254] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:21.001 [2024-11-27 11:13:49.795454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.795503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:21.001 [2024-11-27 11:13:49.795518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.212 ms 00:18:21.001 [2024-11-27 11:13:49.795526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.795621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.795632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:21.001 [2024-11-27 11:13:49.795644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:21.001 [2024-11-27 11:13:49.795655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.795713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.795723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:21.001 [2024-11-27 11:13:49.795737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:21.001 [2024-11-27 11:13:49.795744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.795772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.795781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:21.001 [2024-11-27 11:13:49.795797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:21.001 [2024-11-27 11:13:49.795805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.795844] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:21.001 [2024-11-27 11:13:49.795854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.795864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:21.001 [2024-11-27 11:13:49.795872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:21.001 [2024-11-27 11:13:49.795880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.802038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.802218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:21.001 [2024-11-27 11:13:49.802239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.100 ms 00:18:21.001 [2024-11-27 11:13:49.802250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.802348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.001 [2024-11-27 11:13:49.802365] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:21.001 [2024-11-27 11:13:49.802374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:21.001 [2024-11-27 11:13:49.802384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.001 [2024-11-27 11:13:49.803590] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:21.001 [2024-11-27 11:13:49.804998] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 160.468 ms, result 0 00:18:21.001 [2024-11-27 11:13:49.806720] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:21.001 Some configs were skipped because the RPC state that can call them passed over. 00:18:21.001 11:13:49 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:21.263 [2024-11-27 11:13:50.044941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.263 [2024-11-27 11:13:50.045159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:21.263 [2024-11-27 11:13:50.045230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.331 ms 00:18:21.263 [2024-11-27 11:13:50.045255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.263 [2024-11-27 11:13:50.045314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.710 ms, result 0 00:18:21.263 true 00:18:21.263 11:13:50 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:21.524 [2024-11-27 11:13:50.264248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.524 [2024-11-27 11:13:50.264413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:21.524 [2024-11-27 11:13:50.264472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.341 ms 00:18:21.524 [2024-11-27 11:13:50.264498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.524 [2024-11-27 11:13:50.264552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.654 ms, result 0 00:18:21.524 true 00:18:21.524 11:13:50 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85771 00:18:21.524 11:13:50 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85771 ']' 00:18:21.525 11:13:50 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85771 00:18:21.525 11:13:50 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:21.525 11:13:50 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.525 11:13:50 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85771 00:18:21.525 killing process with pid 85771 00:18:21.525 11:13:50 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:21.525 11:13:50 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:21.525 11:13:50 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85771' 00:18:21.525 11:13:50 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85771 00:18:21.525 11:13:50 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85771 00:18:21.788 [2024-11-27 11:13:50.440332] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.788 [2024-11-27 11:13:50.440384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:21.788 [2024-11-27 11:13:50.440399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:21.788 [2024-11-27 11:13:50.440407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.788 [2024-11-27 11:13:50.440434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:21.788 [2024-11-27 11:13:50.441002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.788 [2024-11-27 11:13:50.441025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:21.788 [2024-11-27 11:13:50.441034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:18:21.788 [2024-11-27 11:13:50.441044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.788 [2024-11-27 11:13:50.441335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.788 [2024-11-27 11:13:50.441355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:21.788 [2024-11-27 11:13:50.441365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:18:21.788 [2024-11-27 11:13:50.441382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.788 [2024-11-27 11:13:50.445958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.788 [2024-11-27 11:13:50.446098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:21.788 [2024-11-27 11:13:50.446115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.557 ms 00:18:21.788 [2024-11-27 11:13:50.446125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.788 [2024-11-27 11:13:50.453218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.788 [2024-11-27 11:13:50.453343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:21.788 [2024-11-27 11:13:50.453359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.056 ms 00:18:21.788 [2024-11-27 11:13:50.453374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.788 [2024-11-27 11:13:50.455827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.788 [2024-11-27 11:13:50.455910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:21.788 [2024-11-27 11:13:50.455923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.379 ms 00:18:21.788 [2024-11-27 11:13:50.455933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.788 [2024-11-27 11:13:50.459881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.788 [2024-11-27 11:13:50.459941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:21.788 [2024-11-27 11:13:50.459952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.902 ms 00:18:21.788 [2024-11-27 11:13:50.459962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.788 [2024-11-27 11:13:50.460100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.788 [2024-11-27 11:13:50.460113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:21.788 [2024-11-27 11:13:50.460123] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:21.788 [2024-11-27 11:13:50.460132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.789 [2024-11-27 11:13:50.462213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.789 [2024-11-27 11:13:50.462260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:21.789 [2024-11-27 11:13:50.462270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.062 ms 00:18:21.789 [2024-11-27 11:13:50.462282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.789 [2024-11-27 11:13:50.464175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.789 [2024-11-27 11:13:50.464220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:21.789 [2024-11-27 11:13:50.464229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.854 ms 00:18:21.789 [2024-11-27 11:13:50.464238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.789 [2024-11-27 11:13:50.465654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.789 [2024-11-27 11:13:50.465698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:21.789 [2024-11-27 11:13:50.465708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.377 ms 00:18:21.789 [2024-11-27 11:13:50.465717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.789 [2024-11-27 11:13:50.467086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.789 [2024-11-27 11:13:50.467237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:21.789 [2024-11-27 11:13:50.467254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:18:21.789 [2024-11-27 11:13:50.467263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.789 [2024-11-27 11:13:50.467297] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:21.789 [2024-11-27 11:13:50.467314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467407] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 
[2024-11-27 11:13:50.467628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:21.789 [2024-11-27 11:13:50.467789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:21.790 [2024-11-27 11:13:50.467840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.467997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:21.790 [2024-11-27 11:13:50.468225] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:21.790 [2024-11-27 11:13:50.468233] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e9e33ba-64fe-44ff-8c18-2e047b084a11 00:18:21.790 [2024-11-27 11:13:50.468242] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:21.790 [2024-11-27 11:13:50.468250] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:21.790 [2024-11-27 11:13:50.468259] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:21.790 [2024-11-27 11:13:50.468269] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:21.790 [2024-11-27 11:13:50.468278] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:21.790 [2024-11-27 11:13:50.468285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:21.790 [2024-11-27 11:13:50.468297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:21.790 [2024-11-27 11:13:50.468303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:21.790 [2024-11-27 11:13:50.468331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:21.790 [2024-11-27 11:13:50.468339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:21.790 [2024-11-27 11:13:50.468351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:21.790 [2024-11-27 11:13:50.468359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:18:21.790 [2024-11-27 11:13:50.468371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.790 [2024-11-27 11:13:50.470244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.790 [2024-11-27 11:13:50.470276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:21.790 [2024-11-27 11:13:50.470286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.853 ms 00:18:21.790 [2024-11-27 11:13:50.470296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.790 [2024-11-27 11:13:50.470409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.790 [2024-11-27 11:13:50.470420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:21.790 [2024-11-27 11:13:50.470429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:21.790 [2024-11-27 11:13:50.470438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.790 [2024-11-27 11:13:50.477288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.790 [2024-11-27 11:13:50.477426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:21.790 [2024-11-27 11:13:50.477480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.790 [2024-11-27 11:13:50.477505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.790 [2024-11-27 11:13:50.477607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.790 [2024-11-27 11:13:50.477640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:21.790 [2024-11-27 11:13:50.477659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.790 [2024-11-27 11:13:50.477683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.790 [2024-11-27 11:13:50.477817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.791 [2024-11-27 11:13:50.477846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:21.791 [2024-11-27 11:13:50.477867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.791 [2024-11-27 11:13:50.477902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.791 [2024-11-27 11:13:50.477939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.791 [2024-11-27 11:13:50.478003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:21.791 [2024-11-27 11:13:50.478026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.791 [2024-11-27 11:13:50.478047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.791 [2024-11-27 11:13:50.490597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.791 [2024-11-27 11:13:50.490742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:21.791 [2024-11-27 11:13:50.490794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.791 [2024-11-27 11:13:50.490819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.791 [2024-11-27 
11:13:50.500361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.791 [2024-11-27 11:13:50.500512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:21.791 [2024-11-27 11:13:50.500528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.791 [2024-11-27 11:13:50.500541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.791 [2024-11-27 11:13:50.500590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.791 [2024-11-27 11:13:50.500602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:21.791 [2024-11-27 11:13:50.500611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.791 [2024-11-27 11:13:50.500623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.791 [2024-11-27 11:13:50.500656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.791 [2024-11-27 11:13:50.500667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:21.791 [2024-11-27 11:13:50.500675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.791 [2024-11-27 11:13:50.500685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.791 [2024-11-27 11:13:50.500760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.791 [2024-11-27 11:13:50.500772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:21.791 [2024-11-27 11:13:50.500780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.791 [2024-11-27 11:13:50.500792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.791 [2024-11-27 11:13:50.500824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.791 [2024-11-27 11:13:50.500836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:21.791 [2024-11-27 11:13:50.500844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.791 [2024-11-27 11:13:50.500855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.791 [2024-11-27 11:13:50.500943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.791 [2024-11-27 11:13:50.500962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:21.791 [2024-11-27 11:13:50.500971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.791 [2024-11-27 11:13:50.500981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.791 [2024-11-27 11:13:50.501033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.791 [2024-11-27 11:13:50.501049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:21.791 [2024-11-27 11:13:50.501056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.791 [2024-11-27 11:13:50.501068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.791 [2024-11-27 11:13:50.501215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.854 ms, result 0 00:18:22.052 11:13:50 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:22.052 11:13:50 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:22.052 [2024-11-27 11:13:50.801200] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:22.052 [2024-11-27 11:13:50.801565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85813 ] 00:18:22.312 [2024-11-27 11:13:50.949409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.312 [2024-11-27 11:13:50.978336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.312 [2024-11-27 11:13:51.058377] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.312 [2024-11-27 11:13:51.058427] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.575 [2024-11-27 11:13:51.201606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.575 [2024-11-27 11:13:51.201653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.575 [2024-11-27 11:13:51.201665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.575 [2024-11-27 11:13:51.201673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.575 [2024-11-27 11:13:51.203914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.575 [2024-11-27 11:13:51.203945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.575 [2024-11-27 11:13:51.203957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.225 ms 00:18:22.575 [2024-11-27 11:13:51.203964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.575 [2024-11-27 11:13:51.204030] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:22.575 [2024-11-27 11:13:51.204249] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.575 [2024-11-27 11:13:51.204266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.575 [2024-11-27 11:13:51.204276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.575 [2024-11-27 11:13:51.204287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:18:22.575 [2024-11-27 11:13:51.204294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.575 [2024-11-27 11:13:51.205595] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:22.575 [2024-11-27 11:13:51.208132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.575 [2024-11-27 11:13:51.208261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:22.575 [2024-11-27 11:13:51.208279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.539 ms 00:18:22.575 [2024-11-27 11:13:51.208288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.575 [2024-11-27 11:13:51.208345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.575 [2024-11-27 11:13:51.208355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:22.575 [2024-11-27 11:13:51.208367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.018 ms 00:18:22.575 [2024-11-27 11:13:51.208374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.575 [2024-11-27 11:13:51.213229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.575 [2024-11-27 11:13:51.213258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.575 [2024-11-27 11:13:51.213268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.817 ms 00:18:22.575 [2024-11-27 11:13:51.213275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.575 [2024-11-27 11:13:51.213382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.575 [2024-11-27 11:13:51.213393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.575 [2024-11-27 11:13:51.213405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:22.575 [2024-11-27 11:13:51.213412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.575 [2024-11-27 11:13:51.213437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.575 [2024-11-27 11:13:51.213449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.575 [2024-11-27 11:13:51.213457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:22.575 [2024-11-27 11:13:51.213464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.575 [2024-11-27 11:13:51.213485] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:22.575 [2024-11-27 11:13:51.214764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.575 [2024-11-27 11:13:51.214787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.575 [2024-11-27 11:13:51.214801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:18:22.575 [2024-11-27 11:13:51.214811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.576 [2024-11-27 11:13:51.214843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.576 [2024-11-27 11:13:51.214853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.576 [2024-11-27 11:13:51.214860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:22.576 [2024-11-27 11:13:51.214869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.576 [2024-11-27 11:13:51.214885] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:22.576 [2024-11-27 11:13:51.214923] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:22.576 [2024-11-27 11:13:51.214956] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:22.576 [2024-11-27 11:13:51.214974] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:22.576 [2024-11-27 11:13:51.215076] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:22.576 [2024-11-27 11:13:51.215085] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.576 [2024-11-27 11:13:51.215095] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:22.576 [2024-11-27 11:13:51.215108] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.576 [2024-11-27 11:13:51.215117] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.576 [2024-11-27 11:13:51.215125] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:22.576 [2024-11-27 11:13:51.215132] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.576 [2024-11-27 11:13:51.215138] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:22.576 [2024-11-27 11:13:51.215149] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:22.576 [2024-11-27 11:13:51.215156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.576 [2024-11-27 11:13:51.215163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.576 [2024-11-27 11:13:51.215173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:18:22.576 [2024-11-27 11:13:51.215183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.576 [2024-11-27 11:13:51.215269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.576 [2024-11-27 11:13:51.215277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:22.576 [2024-11-27 11:13:51.215284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:22.576 [2024-11-27 11:13:51.215291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.576 [2024-11-27 11:13:51.215393] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.576 [2024-11-27 11:13:51.215407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.576 [2024-11-27 11:13:51.215416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.576 [2024-11-27 11:13:51.215426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.576 [2024-11-27 11:13:51.215435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:22.576 [2024-11-27 11:13:51.215443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.576 [2024-11-27 11:13:51.215450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:22.576 [2024-11-27 11:13:51.215457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.576 [2024-11-27 11:13:51.215465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:22.576 [2024-11-27 11:13:51.215472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.576 [2024-11-27 11:13:51.215482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.576 [2024-11-27 11:13:51.215489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:22.576 [2024-11-27 11:13:51.215497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.576 [2024-11-27 11:13:51.215505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.576 [2024-11-27 11:13:51.215513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:22.576 [2024-11-27 11:13:51.215521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.576 [2024-11-27 11:13:51.215528] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.576 [2024-11-27 11:13:51.215536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:22.576 [2024-11-27 11:13:51.215543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.576 [2024-11-27 11:13:51.215550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.576 [2024-11-27 11:13:51.215558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:22.576 [2024-11-27 11:13:51.215566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.576 [2024-11-27 11:13:51.215573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.576 [2024-11-27 11:13:51.215580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:22.576 [2024-11-27 11:13:51.215589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.576 [2024-11-27 11:13:51.215597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.576 [2024-11-27 11:13:51.215608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:22.576 [2024-11-27 11:13:51.215615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.576 [2024-11-27 11:13:51.215622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.576 [2024-11-27 11:13:51.215630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:22.576 [2024-11-27 11:13:51.215637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.576 [2024-11-27 11:13:51.215645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.576 [2024-11-27 11:13:51.215652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:22.576 [2024-11-27 11:13:51.215659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.576 [2024-11-27 11:13:51.215666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:22.576 [2024-11-27 11:13:51.215674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:22.576 [2024-11-27 11:13:51.215681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.576 [2024-11-27 11:13:51.215688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:22.576 [2024-11-27 11:13:51.215696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:22.576 [2024-11-27 11:13:51.215703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.576 [2024-11-27 11:13:51.215711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:22.576 [2024-11-27 11:13:51.215718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:22.576 [2024-11-27 11:13:51.215727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.576 [2024-11-27 11:13:51.215735] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.576 [2024-11-27 11:13:51.215743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.576 [2024-11-27 11:13:51.215751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.576 [2024-11-27 11:13:51.215759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.576 [2024-11-27 11:13:51.215768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.576 
[2024-11-27 11:13:51.215776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.576 [2024-11-27 11:13:51.215783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.576 [2024-11-27 11:13:51.215790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.576 [2024-11-27 11:13:51.215798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.576 [2024-11-27 11:13:51.215805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.576 [2024-11-27 11:13:51.215814] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.576 [2024-11-27 11:13:51.215823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.576 [2024-11-27 11:13:51.215832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:22.576 [2024-11-27 11:13:51.215841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:22.576 [2024-11-27 11:13:51.215850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:22.576 [2024-11-27 11:13:51.215860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:22.576 [2024-11-27 11:13:51.215868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:22.576 [2024-11-27 11:13:51.215876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:22.576 [2024-11-27 11:13:51.215884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:22.576 [2024-11-27 11:13:51.215904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:22.576 [2024-11-27 11:13:51.215913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:22.576 [2024-11-27 11:13:51.215920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:22.576 [2024-11-27 11:13:51.215926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:22.576 [2024-11-27 11:13:51.215933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:22.576 [2024-11-27 11:13:51.215940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:22.576 [2024-11-27 11:13:51.215948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:22.576 [2024-11-27 11:13:51.215954] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.576 [2024-11-27 11:13:51.215962] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.576 [2024-11-27 11:13:51.215970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:22.576 [2024-11-27 11:13:51.215977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.577 [2024-11-27 11:13:51.215984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.577 [2024-11-27 11:13:51.215994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:22.577 [2024-11-27 11:13:51.216001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.216012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.577 [2024-11-27 11:13:51.216021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:18:22.577 [2024-11-27 11:13:51.216028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.231944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.231983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.577 [2024-11-27 11:13:51.231994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.865 ms 00:18:22.577 [2024-11-27 11:13:51.232008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.232138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.232149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:22.577 [2024-11-27 11:13:51.232161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:22.577 [2024-11-27 11:13:51.232171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.239975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.240007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.577 [2024-11-27 11:13:51.240016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.783 ms 00:18:22.577 [2024-11-27 11:13:51.240024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.240066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.240075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.577 [2024-11-27 11:13:51.240086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:22.577 [2024-11-27 11:13:51.240095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.240400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.240413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.577 [2024-11-27 11:13:51.240422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:18:22.577 [2024-11-27 11:13:51.240429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 
11:13:51.240556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.240565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.577 [2024-11-27 11:13:51.240574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:22.577 [2024-11-27 11:13:51.240584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.245333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.245369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.577 [2024-11-27 11:13:51.245379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.722 ms 00:18:22.577 [2024-11-27 11:13:51.245387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.248140] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:22.577 [2024-11-27 11:13:51.248179] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:22.577 [2024-11-27 11:13:51.248190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.248198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:22.577 [2024-11-27 11:13:51.248206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:18:22.577 [2024-11-27 11:13:51.248212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.262766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.262800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:22.577 [2024-11-27 11:13:51.262810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.506 ms 00:18:22.577 [2024-11-27 11:13:51.262817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.264859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.264910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:22.577 [2024-11-27 11:13:51.264919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.976 ms 00:18:22.577 [2024-11-27 11:13:51.264925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.266852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.266993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:22.577 [2024-11-27 11:13:51.267010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.890 ms 00:18:22.577 [2024-11-27 11:13:51.267017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.267346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.267359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:22.577 [2024-11-27 11:13:51.267368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:18:22.577 [2024-11-27 11:13:51.267377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.283195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.283242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:22.577 [2024-11-27 11:13:51.283253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.798 ms 00:18:22.577 [2024-11-27 11:13:51.283261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.290608] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:22.577 [2024-11-27 11:13:51.304487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.304527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:22.577 [2024-11-27 11:13:51.304538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.172 ms 00:18:22.577 [2024-11-27 11:13:51.304545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.304631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.304641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:22.577 [2024-11-27 11:13:51.304650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:22.577 [2024-11-27 11:13:51.304657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.304705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.304714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:22.577 [2024-11-27 11:13:51.304725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:22.577 [2024-11-27 11:13:51.304733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.304753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.304760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:22.577 [2024-11-27 11:13:51.304769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.577 [2024-11-27 11:13:51.304776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.304807] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:22.577 [2024-11-27 11:13:51.304824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.304831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:22.577 [2024-11-27 11:13:51.304838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:22.577 [2024-11-27 11:13:51.304845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.308960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.309029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:22.577 [2024-11-27 11:13:51.309039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.093 ms 00:18:22.577 [2024-11-27 11:13:51.309047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.309131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.577 [2024-11-27 11:13:51.309144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:18:22.577 [2024-11-27 11:13:51.309152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:22.577 [2024-11-27 11:13:51.309159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.577 [2024-11-27 11:13:51.310324] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.577 [2024-11-27 11:13:51.311331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.444 ms, result 0 00:18:22.577 [2024-11-27 11:13:51.312318] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:22.577 [2024-11-27 11:13:51.321433] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:23.522  [2024-11-27T11:13:53.344Z] Copying: 20/256 [MB] (20 MBps) [2024-11-27T11:13:54.732Z] Copying: 34/256 [MB] (13 MBps) [2024-11-27T11:13:55.678Z] Copying: 49/256 [MB] (14 MBps) [2024-11-27T11:13:56.621Z] Copying: 72/256 [MB] (23 MBps) [2024-11-27T11:13:57.562Z] Copying: 94/256 [MB] (22 MBps) [2024-11-27T11:13:58.506Z] Copying: 117/256 [MB] (23 MBps) [2024-11-27T11:13:59.450Z] Copying: 134/256 [MB] (17 MBps) [2024-11-27T11:14:00.437Z] Copying: 150/256 [MB] (15 MBps) [2024-11-27T11:14:01.380Z] Copying: 163/256 [MB] (13 MBps) [2024-11-27T11:14:02.325Z] Copying: 178/256 [MB] (14 MBps) [2024-11-27T11:14:03.754Z] Copying: 198/256 [MB] (20 MBps) [2024-11-27T11:14:04.325Z] Copying: 209/256 [MB] (10 MBps) [2024-11-27T11:14:05.711Z] Copying: 219/256 [MB] (10 MBps) [2024-11-27T11:14:06.652Z] Copying: 230/256 [MB] (10 MBps) [2024-11-27T11:14:07.596Z] Copying: 240/256 [MB] (10 MBps) [2024-11-27T11:14:07.596Z] Copying: 252/256 [MB] (11 MBps) [2024-11-27T11:14:07.596Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-27 11:14:07.574675] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:38.713 [2024-11-27 11:14:07.576642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.713 [2024-11-27 11:14:07.576681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:38.713 [2024-11-27 11:14:07.576705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:38.713 [2024-11-27 11:14:07.576714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.713 [2024-11-27 11:14:07.576736] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:38.713 [2024-11-27 11:14:07.577462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.713 [2024-11-27 11:14:07.577488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:38.713 [2024-11-27 11:14:07.577500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:18:38.713 [2024-11-27 11:14:07.577509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.713 [2024-11-27 11:14:07.577777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.713 [2024-11-27 11:14:07.577879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:38.713 [2024-11-27 11:14:07.577921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:18:38.713 [2024-11-27 11:14:07.577932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.713 [2024-11-27 
11:14:07.581998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.713 [2024-11-27 11:14:07.582020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:38.713 [2024-11-27 11:14:07.582031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.038 ms 00:18:38.713 [2024-11-27 11:14:07.582040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.713 [2024-11-27 11:14:07.589033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.713 [2024-11-27 11:14:07.589230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:38.713 [2024-11-27 11:14:07.589251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.960 ms 00:18:38.713 [2024-11-27 11:14:07.589260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.713 [2024-11-27 11:14:07.592174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.713 [2024-11-27 11:14:07.592226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:38.714 [2024-11-27 11:14:07.592236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.843 ms 00:18:38.714 [2024-11-27 11:14:07.592256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.977 [2024-11-27 11:14:07.597039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.977 [2024-11-27 11:14:07.597091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:38.977 [2024-11-27 11:14:07.597110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.714 ms 00:18:38.977 [2024-11-27 11:14:07.597117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.977 [2024-11-27 11:14:07.597255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.977 [2024-11-27 11:14:07.597266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:38.977 [2024-11-27 11:14:07.597281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:38.977 [2024-11-27 11:14:07.597289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.977 [2024-11-27 11:14:07.600483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.977 [2024-11-27 11:14:07.600533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:38.977 [2024-11-27 11:14:07.600543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.175 ms 00:18:38.977 [2024-11-27 11:14:07.600550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.977 [2024-11-27 11:14:07.603325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.977 [2024-11-27 11:14:07.603513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:38.977 [2024-11-27 11:14:07.603532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.730 ms 00:18:38.977 [2024-11-27 11:14:07.603539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.977 [2024-11-27 11:14:07.606022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.977 [2024-11-27 11:14:07.606070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:38.977 [2024-11-27 11:14:07.606080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.390 ms 00:18:38.977 [2024-11-27 11:14:07.606087] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:38.977 [2024-11-27 11:14:07.608325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.977 [2024-11-27 11:14:07.608375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:38.977 [2024-11-27 11:14:07.608384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms 00:18:38.977 [2024-11-27 11:14:07.608391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.977 [2024-11-27 11:14:07.608484] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:38.977 [2024-11-27 11:14:07.608510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608855] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:38.977 [2024-11-27 11:14:07.608886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.608934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.608942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.608950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.608958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.608966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.608974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.608982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.608990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.608998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609112] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 
11:14:07.609301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:38.978 [2024-11-27 11:14:07.609347] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:38.978 [2024-11-27 11:14:07.609355] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e9e33ba-64fe-44ff-8c18-2e047b084a11 00:18:38.978 [2024-11-27 11:14:07.609372] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:38.978 [2024-11-27 11:14:07.609380] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:38.978 [2024-11-27 11:14:07.609387] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:38.978 [2024-11-27 11:14:07.609395] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:38.978 [2024-11-27 11:14:07.609402] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:38.978 [2024-11-27 11:14:07.609411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:38.978 [2024-11-27 11:14:07.609418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:38.978 [2024-11-27 11:14:07.609425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:38.978 [2024-11-27 11:14:07.609432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:38.978 [2024-11-27 11:14:07.609439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.978 [2024-11-27 11:14:07.609452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:38.978 [2024-11-27 11:14:07.609467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:18:38.978 [2024-11-27 11:14:07.609475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.978 [2024-11-27 11:14:07.611780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.978 [2024-11-27 11:14:07.611813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:38.978 [2024-11-27 11:14:07.611823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.285 ms 00:18:38.978 [2024-11-27 11:14:07.611831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.978 [2024-11-27 11:14:07.611979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.978 [2024-11-27 11:14:07.611996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:38.978 [2024-11-27 11:14:07.612005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:38.978 [2024-11-27 11:14:07.612018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.978 [2024-11-27 11:14:07.619592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.978 [2024-11-27 11:14:07.619644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:38.978 
[2024-11-27 11:14:07.619656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.978 [2024-11-27 11:14:07.619672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.978 [2024-11-27 11:14:07.619770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.978 [2024-11-27 11:14:07.619784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:38.978 [2024-11-27 11:14:07.619796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.978 [2024-11-27 11:14:07.619804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.978 [2024-11-27 11:14:07.619851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.978 [2024-11-27 11:14:07.619864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:38.978 [2024-11-27 11:14:07.619872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.978 [2024-11-27 11:14:07.619880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.978 [2024-11-27 11:14:07.619926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.978 [2024-11-27 11:14:07.619936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:38.978 [2024-11-27 11:14:07.619946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.978 [2024-11-27 11:14:07.619954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.978 [2024-11-27 11:14:07.633509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.978 [2024-11-27 11:14:07.633566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:38.978 [2024-11-27 11:14:07.633578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.978 [2024-11-27 11:14:07.633587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.978 [2024-11-27 11:14:07.644170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.978 [2024-11-27 11:14:07.644225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:38.978 [2024-11-27 11:14:07.644237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.979 [2024-11-27 11:14:07.644245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.979 [2024-11-27 11:14:07.644294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.979 [2024-11-27 11:14:07.644304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:38.979 [2024-11-27 11:14:07.644312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.979 [2024-11-27 11:14:07.644321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.979 [2024-11-27 11:14:07.644358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.979 [2024-11-27 11:14:07.644371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:38.979 [2024-11-27 11:14:07.644379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.979 [2024-11-27 11:14:07.644390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.979 [2024-11-27 11:14:07.644459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.979 [2024-11-27 11:14:07.644469] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:38.979 [2024-11-27 11:14:07.644478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.979 [2024-11-27 11:14:07.644486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.979 [2024-11-27 11:14:07.644520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.979 [2024-11-27 11:14:07.644533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:38.979 [2024-11-27 11:14:07.644540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.979 [2024-11-27 11:14:07.644548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.979 [2024-11-27 11:14:07.644591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.979 [2024-11-27 11:14:07.644600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:38.979 [2024-11-27 11:14:07.644609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.979 [2024-11-27 11:14:07.644617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.979 [2024-11-27 11:14:07.644664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.979 [2024-11-27 11:14:07.644675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:38.979 [2024-11-27 11:14:07.644683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.979 [2024-11-27 11:14:07.644694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.979 [2024-11-27 11:14:07.644841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.169 ms, result 0 00:18:39.240 00:18:39.240 00:18:39.240 11:14:07 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:39.240 11:14:07 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:39.813 11:14:08 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:39.813 [2024-11-27 11:14:08.519310] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:39.813 [2024-11-27 11:14:08.519438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86004 ] 00:18:39.814 [2024-11-27 11:14:08.667975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.075 [2024-11-27 11:14:08.720774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.075 [2024-11-27 11:14:08.830483] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:40.075 [2024-11-27 11:14:08.830562] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:40.338 [2024-11-27 11:14:08.991701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.338 [2024-11-27 11:14:08.991765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.338 [2024-11-27 11:14:08.991780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:40.338 [2024-11-27 11:14:08.991793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.338 [2024-11-27 11:14:08.994426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.338 [2024-11-27 11:14:08.994482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.338 [2024-11-27 11:14:08.994496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:18:40.338 [2024-11-27 11:14:08.994504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.338 [2024-11-27 11:14:08.994618] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.338 [2024-11-27 11:14:08.994880] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.338 [2024-11-27 11:14:08.994917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.338 [2024-11-27 11:14:08.994930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.338 [2024-11-27 11:14:08.994943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:18:40.338 [2024-11-27 11:14:08.994954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.338 [2024-11-27 11:14:08.996946] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:40.338 [2024-11-27 11:14:09.000937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.338 [2024-11-27 11:14:09.000989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:40.338 [2024-11-27 11:14:09.001001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.994 ms 00:18:40.338 [2024-11-27 11:14:09.001022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.338 [2024-11-27 11:14:09.001114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.338 [2024-11-27 11:14:09.001124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:40.338 [2024-11-27 11:14:09.001133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:40.338 [2024-11-27 11:14:09.001141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.338 [2024-11-27 11:14:09.009568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:40.338 [2024-11-27 11:14:09.009615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.338 [2024-11-27 11:14:09.009626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.379 ms 00:18:40.338 [2024-11-27 11:14:09.009634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.338 [2024-11-27 11:14:09.009783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.338 [2024-11-27 11:14:09.009796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.338 [2024-11-27 11:14:09.009805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:40.339 [2024-11-27 11:14:09.009813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.339 [2024-11-27 11:14:09.009845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.339 [2024-11-27 11:14:09.009858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.339 [2024-11-27 11:14:09.009866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:40.339 [2024-11-27 11:14:09.009878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.339 [2024-11-27 11:14:09.009929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:40.339 [2024-11-27 11:14:09.012015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.339 [2024-11-27 11:14:09.012048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.339 [2024-11-27 11:14:09.012057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.093 ms 00:18:40.339 [2024-11-27 11:14:09.012065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.339 [2024-11-27 11:14:09.012108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.339 [2024-11-27 11:14:09.012124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.339 [2024-11-27 11:14:09.012135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:40.339 [2024-11-27 11:14:09.012143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.339 [2024-11-27 11:14:09.012162] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:40.339 [2024-11-27 11:14:09.012181] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:40.339 [2024-11-27 11:14:09.012218] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:40.339 [2024-11-27 11:14:09.012239] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:40.339 [2024-11-27 11:14:09.012346] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:40.339 [2024-11-27 11:14:09.012357] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.339 [2024-11-27 11:14:09.012368] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:40.339 [2024-11-27 11:14:09.012378] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.339 [2024-11-27 11:14:09.012388] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.339 [2024-11-27 11:14:09.012396] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:40.339 [2024-11-27 11:14:09.012409] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.339 [2024-11-27 11:14:09.012417] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:40.339 [2024-11-27 11:14:09.012424] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:40.339 [2024-11-27 11:14:09.012433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.339 [2024-11-27 11:14:09.012446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.339 [2024-11-27 11:14:09.012457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:18:40.339 [2024-11-27 11:14:09.012464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.339 [2024-11-27 11:14:09.012552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.339 [2024-11-27 11:14:09.012562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.339 [2024-11-27 11:14:09.012570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:40.339 [2024-11-27 11:14:09.012580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.339 [2024-11-27 11:14:09.012686] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.339 [2024-11-27 11:14:09.012702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.339 [2024-11-27 11:14:09.012712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.339 [2024-11-27 11:14:09.012723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.339 [2024-11-27 11:14:09.012732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.339 [2024-11-27 11:14:09.012740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.339 [2024-11-27 11:14:09.012749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:40.339 [2024-11-27 11:14:09.012758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.339 [2024-11-27 11:14:09.012779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:40.339 [2024-11-27 11:14:09.012788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.339 [2024-11-27 11:14:09.012797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.339 [2024-11-27 11:14:09.012805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:40.339 [2024-11-27 11:14:09.012812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.339 [2024-11-27 11:14:09.012820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.339 [2024-11-27 11:14:09.012829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:40.339 [2024-11-27 11:14:09.012837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.339 [2024-11-27 11:14:09.012845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.339 [2024-11-27 11:14:09.012854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:40.339 [2024-11-27 11:14:09.012863] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.339 [2024-11-27 11:14:09.012871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.339 [2024-11-27 11:14:09.012879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:40.339 [2024-11-27 11:14:09.012888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.339 [2024-11-27 11:14:09.012925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.339 [2024-11-27 11:14:09.012934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:40.339 [2024-11-27 11:14:09.012946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.339 [2024-11-27 11:14:09.012954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.339 [2024-11-27 11:14:09.012963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:40.339 [2024-11-27 11:14:09.012971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.339 [2024-11-27 11:14:09.012979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.339 [2024-11-27 11:14:09.012987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:40.339 [2024-11-27 11:14:09.012995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.339 [2024-11-27 11:14:09.013003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.339 [2024-11-27 11:14:09.013011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:40.339 [2024-11-27 11:14:09.013019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.339 [2024-11-27 11:14:09.013029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.339 [2024-11-27 11:14:09.013037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:40.339 [2024-11-27 11:14:09.013045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.339 [2024-11-27 11:14:09.013053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:40.339 [2024-11-27 11:14:09.013060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:40.339 [2024-11-27 11:14:09.013067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.339 [2024-11-27 11:14:09.013076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:40.339 [2024-11-27 11:14:09.013084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:40.339 [2024-11-27 11:14:09.013091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.339 [2024-11-27 11:14:09.013098] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.339 [2024-11-27 11:14:09.013107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.339 [2024-11-27 11:14:09.013115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.339 [2024-11-27 11:14:09.013122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.339 [2024-11-27 11:14:09.013131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.339 [2024-11-27 11:14:09.013138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.339 [2024-11-27 11:14:09.013146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.339 
[2024-11-27 11:14:09.013154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.339 [2024-11-27 11:14:09.013160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.340 [2024-11-27 11:14:09.013167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.340 [2024-11-27 11:14:09.013176] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.340 [2024-11-27 11:14:09.013185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.340 [2024-11-27 11:14:09.013195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:40.340 [2024-11-27 11:14:09.013205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:40.340 [2024-11-27 11:14:09.013212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:40.340 [2024-11-27 11:14:09.013220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:40.340 [2024-11-27 11:14:09.013227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:40.340 [2024-11-27 11:14:09.013234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:40.340 [2024-11-27 11:14:09.013241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:40.340 [2024-11-27 11:14:09.013249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:40.340 [2024-11-27 11:14:09.013256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:40.340 [2024-11-27 11:14:09.013263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:40.340 [2024-11-27 11:14:09.013270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:40.340 [2024-11-27 11:14:09.013277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:40.340 [2024-11-27 11:14:09.013284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:40.340 [2024-11-27 11:14:09.013292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:40.340 [2024-11-27 11:14:09.013301] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.340 [2024-11-27 11:14:09.013310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.340 [2024-11-27 11:14:09.013320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.340 [2024-11-27 11:14:09.013330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.340 [2024-11-27 11:14:09.013338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.340 [2024-11-27 11:14:09.013345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.340 [2024-11-27 11:14:09.013353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.013361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.340 [2024-11-27 11:14:09.013371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:18:40.340 [2024-11-27 11:14:09.013379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.038666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.038877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:40.340 [2024-11-27 11:14:09.039119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.233 ms 00:18:40.340 [2024-11-27 11:14:09.039167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.039369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.039425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:40.340 [2024-11-27 11:14:09.039528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:40.340 [2024-11-27 11:14:09.039566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.051440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.051603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:40.340 [2024-11-27 11:14:09.051665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.829 ms 00:18:40.340 [2024-11-27 11:14:09.051691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.051798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.051835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:40.340 [2024-11-27 11:14:09.051864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:40.340 [2024-11-27 11:14:09.051997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.052495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.052562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:40.340 [2024-11-27 11:14:09.052589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:18:40.340 [2024-11-27 11:14:09.052612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.052864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.052951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:40.340 [2024-11-27 11:14:09.053162] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:18:40.340 [2024-11-27 11:14:09.053218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.060016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.060162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:40.340 [2024-11-27 11:14:09.060220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.745 ms 00:18:40.340 [2024-11-27 11:14:09.060242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.063850] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:40.340 [2024-11-27 11:14:09.064041] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:40.340 [2024-11-27 11:14:09.064104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.064124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:40.340 [2024-11-27 11:14:09.064144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.766 ms 00:18:40.340 [2024-11-27 11:14:09.064162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.079591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.079740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:40.340 [2024-11-27 11:14:09.079808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.367 ms 00:18:40.340 [2024-11-27 11:14:09.079832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.082685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.082851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:40.340 [2024-11-27 11:14:09.082928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:18:40.340 [2024-11-27 11:14:09.082952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.086235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.086425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:40.340 [2024-11-27 11:14:09.086503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.875 ms 00:18:40.340 [2024-11-27 11:14:09.086527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.086877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.087157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:40.340 [2024-11-27 11:14:09.087208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:18:40.340 [2024-11-27 11:14:09.087227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.108991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.109190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:40.340 [2024-11-27 11:14:09.109250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
21.714 ms 00:18:40.340 [2024-11-27 11:14:09.109273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.340 [2024-11-27 11:14:09.117328] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:40.340 [2024-11-27 11:14:09.135446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.340 [2024-11-27 11:14:09.135605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:40.341 [2024-11-27 11:14:09.135659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.041 ms 00:18:40.341 [2024-11-27 11:14:09.135683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-11-27 11:14:09.135785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-11-27 11:14:09.135813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:40.341 [2024-11-27 11:14:09.135834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:40.341 [2024-11-27 11:14:09.135854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-11-27 11:14:09.135947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-11-27 11:14:09.136021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:40.341 [2024-11-27 11:14:09.136032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:40.341 [2024-11-27 11:14:09.136040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-11-27 11:14:09.136073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-11-27 11:14:09.136083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:40.341 [2024-11-27 11:14:09.136092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:40.341 [2024-11-27 11:14:09.136104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-11-27 11:14:09.136142] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:40.341 [2024-11-27 11:14:09.136155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-11-27 11:14:09.136170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:40.341 [2024-11-27 11:14:09.136179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:40.341 [2024-11-27 11:14:09.136186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-11-27 11:14:09.141846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-11-27 11:14:09.142020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:40.341 [2024-11-27 11:14:09.142077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.634 ms 00:18:40.341 [2024-11-27 11:14:09.142100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 [2024-11-27 11:14:09.142195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.341 [2024-11-27 11:14:09.142223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:40.341 [2024-11-27 11:14:09.142244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:40.341 [2024-11-27 11:14:09.142263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.341 
[2024-11-27 11:14:09.143270] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.341 [2024-11-27 11:14:09.144731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.258 ms, result 0 00:18:40.341 [2024-11-27 11:14:09.146321] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:40.341 [2024-11-27 11:14:09.153423] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.602  [2024-11-27T11:14:09.485Z] Copying: 4096/4096 [kB] (average 13 MBps)[2024-11-27 11:14:09.447836] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:40.602 [2024-11-27 11:14:09.449362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.602 [2024-11-27 11:14:09.449533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:40.602 [2024-11-27 11:14:09.449611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:40.602 [2024-11-27 11:14:09.449636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.602 [2024-11-27 11:14:09.449676] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:40.602 [2024-11-27 11:14:09.450422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.602 [2024-11-27 11:14:09.450585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:40.602 [2024-11-27 11:14:09.450651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:18:40.602 [2024-11-27 11:14:09.450676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.602 [2024-11-27 11:14:09.452772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.602 [2024-11-27 11:14:09.452978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:40.602 [2024-11-27 11:14:09.453086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.047 ms 00:18:40.602 [2024-11-27 11:14:09.453112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.602 [2024-11-27 11:14:09.457621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.602 [2024-11-27 11:14:09.457766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:40.602 [2024-11-27 11:14:09.457827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.463 ms 00:18:40.602 [2024-11-27 11:14:09.457853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.602 [2024-11-27 11:14:09.464878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.602 [2024-11-27 11:14:09.465059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:40.602 [2024-11-27 11:14:09.465126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.950 ms 00:18:40.602 [2024-11-27 11:14:09.465137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.602 [2024-11-27 11:14:09.468034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.602 [2024-11-27 11:14:09.468086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:40.602 [2024-11-27 11:14:09.468096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.832 ms 00:18:40.602 [2024-11-27 11:14:09.468116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.602 [2024-11-27 11:14:09.472722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.602 [2024-11-27 11:14:09.472779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:40.602 [2024-11-27 11:14:09.472799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.555 ms 00:18:40.602 [2024-11-27 11:14:09.472806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.602 [2024-11-27 11:14:09.472984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.602 [2024-11-27 11:14:09.472996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:40.602 [2024-11-27 11:14:09.473005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:18:40.602 [2024-11-27 11:14:09.473014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.602 [2024-11-27 11:14:09.476469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.602 [2024-11-27 11:14:09.476521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:40.602 [2024-11-27 11:14:09.476531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.430 ms 00:18:40.602 [2024-11-27 11:14:09.476538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.602 [2024-11-27 11:14:09.479697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.602 [2024-11-27 11:14:09.479866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:40.602 [2024-11-27 11:14:09.479883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms 00:18:40.602 [2024-11-27 11:14:09.479910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.602 [2024-11-27 11:14:09.482288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.602 [2024-11-27 11:14:09.482337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:40.602 [2024-11-27 11:14:09.482348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.301 ms 00:18:40.602 [2024-11-27 11:14:09.482356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.864 [2024-11-27 11:14:09.484803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.864 [2024-11-27 11:14:09.484856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:40.864 [2024-11-27 11:14:09.484868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.367 ms 00:18:40.864 [2024-11-27 11:14:09.484875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.864 [2024-11-27 11:14:09.484959] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:40.865 [2024-11-27 11:14:09.484984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 
11:14:09.485026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:18:40.865 [2024-11-27 11:14:09.485212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:40.865 [2024-11-27 11:14:09.485534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:40.866 [2024-11-27 11:14:09.485771] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:40.866 [2024-11-27 11:14:09.485780] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e9e33ba-64fe-44ff-8c18-2e047b084a11 00:18:40.866 [2024-11-27 11:14:09.485795] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:40.866 [2024-11-27 11:14:09.485803] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:40.866 
[2024-11-27 11:14:09.485810] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:40.866 [2024-11-27 11:14:09.485818] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:40.866 [2024-11-27 11:14:09.485826] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:40.866 [2024-11-27 11:14:09.485834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:40.866 [2024-11-27 11:14:09.485842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:40.866 [2024-11-27 11:14:09.485848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:40.866 [2024-11-27 11:14:09.485855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:40.866 [2024-11-27 11:14:09.485863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.866 [2024-11-27 11:14:09.485870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:40.866 [2024-11-27 11:14:09.485882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:18:40.866 [2024-11-27 11:14:09.485912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.866 [2024-11-27 11:14:09.488188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.866 [2024-11-27 11:14:09.488222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:40.866 [2024-11-27 11:14:09.488233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.252 ms 00:18:40.866 [2024-11-27 11:14:09.488243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.866 [2024-11-27 11:14:09.488402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.866 [2024-11-27 11:14:09.488429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:40.866 [2024-11-27 11:14:09.488442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:18:40.866 [2024-11-27 11:14:09.488450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.866 [2024-11-27 11:14:09.496197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:40.866 [2024-11-27 11:14:09.496247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:40.866 [2024-11-27 11:14:09.496258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:40.866 [2024-11-27 11:14:09.496266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.866 [2024-11-27 11:14:09.496339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:40.866 [2024-11-27 11:14:09.496350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:40.866 [2024-11-27 11:14:09.496358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:40.866 [2024-11-27 11:14:09.496366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.866 [2024-11-27 11:14:09.496417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:40.866 [2024-11-27 11:14:09.496426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:40.866 [2024-11-27 11:14:09.496434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:40.866 [2024-11-27 11:14:09.496442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.866 [2024-11-27 11:14:09.496458] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:40.866 [2024-11-27 11:14:09.496466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:40.866 [2024-11-27 11:14:09.496477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:40.866 [2024-11-27 11:14:09.496484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.866 [2024-11-27 11:14:09.510577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:40.866 [2024-11-27 11:14:09.510633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:40.866 [2024-11-27 11:14:09.510645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:40.866 [2024-11-27 11:14:09.510653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.866 [2024-11-27 11:14:09.521204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:40.866 [2024-11-27 11:14:09.521263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:40.866 [2024-11-27 11:14:09.521275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:40.866 [2024-11-27 11:14:09.521283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.866 [2024-11-27 11:14:09.521333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:40.866 [2024-11-27 11:14:09.521342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.866 [2024-11-27 11:14:09.521351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:40.866 [2024-11-27 11:14:09.521359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.866 [2024-11-27 11:14:09.521388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:40.866 [2024-11-27 11:14:09.521405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.866 [2024-11-27 11:14:09.521414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:40.866 [2024-11-27 11:14:09.521424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.866 [2024-11-27 11:14:09.521497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:40.866 [2024-11-27 11:14:09.521508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.867 [2024-11-27 11:14:09.521517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:40.867 [2024-11-27 11:14:09.521525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.867 [2024-11-27 11:14:09.521555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:40.867 [2024-11-27 11:14:09.521565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:40.867 [2024-11-27 11:14:09.521573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:40.867 [2024-11-27 11:14:09.521581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.867 [2024-11-27 11:14:09.521625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:40.867 [2024-11-27 11:14:09.521636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.867 [2024-11-27 11:14:09.521645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:40.867 [2024-11-27 11:14:09.521653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:40.867 [2024-11-27 11:14:09.521699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:40.867 [2024-11-27 11:14:09.521713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.867 [2024-11-27 11:14:09.521722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:40.867 [2024-11-27 11:14:09.521732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.867 [2024-11-27 11:14:09.521878] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.486 ms, result 0 00:18:40.867 00:18:40.867 00:18:41.128 11:14:09 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:41.128 11:14:09 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=86018 00:18:41.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.128 11:14:09 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 86018 00:18:41.128 11:14:09 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86018 ']' 00:18:41.128 11:14:09 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.128 11:14:09 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.128 11:14:09 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.128 11:14:09 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.128 11:14:09 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:41.128 [2024-11-27 11:14:09.834353] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:41.128 [2024-11-27 11:14:09.834500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86018 ] 00:18:41.128 [2024-11-27 11:14:09.986336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.389 [2024-11-27 11:14:10.037418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.962 11:14:10 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.962 11:14:10 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:41.962 11:14:10 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:42.224 [2024-11-27 11:14:10.874901] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:42.224 [2024-11-27 11:14:10.874981] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:42.224 [2024-11-27 11:14:11.043186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.224 [2024-11-27 11:14:11.043431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:42.224 [2024-11-27 11:14:11.043457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:42.224 [2024-11-27 11:14:11.043468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.224 [2024-11-27 11:14:11.046069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.224 [2024-11-27 11:14:11.046127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:42.224 [2024-11-27 11:14:11.046138] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.571 ms 00:18:42.224 [2024-11-27 11:14:11.046147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.224 [2024-11-27 11:14:11.046258] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:42.224 [2024-11-27 11:14:11.046532] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:42.224 [2024-11-27 11:14:11.046554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.224 [2024-11-27 11:14:11.046565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:42.224 [2024-11-27 11:14:11.046575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:18:42.224 [2024-11-27 11:14:11.046588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.224 [2024-11-27 11:14:11.048580] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:42.224 [2024-11-27 11:14:11.052671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.224 [2024-11-27 11:14:11.052865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:42.224 [2024-11-27 11:14:11.053101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.087 ms 00:18:42.224 [2024-11-27 11:14:11.053136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.224 [2024-11-27 11:14:11.053234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.224 [2024-11-27 11:14:11.053460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:42.224 [2024-11-27 11:14:11.053493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:42.224 [2024-11-27 11:14:11.053515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.224 [2024-11-27 11:14:11.061859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.224 [2024-11-27 11:14:11.061921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:42.224 [2024-11-27 11:14:11.061934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.280 ms 00:18:42.224 [2024-11-27 11:14:11.061942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.224 [2024-11-27 11:14:11.062097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.224 [2024-11-27 11:14:11.062111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:42.224 [2024-11-27 11:14:11.062122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:42.224 [2024-11-27 11:14:11.062130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.224 [2024-11-27 11:14:11.062161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.224 [2024-11-27 11:14:11.062170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:42.224 [2024-11-27 11:14:11.062180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:42.224 [2024-11-27 11:14:11.062190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.224 [2024-11-27 11:14:11.062218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:42.224 [2024-11-27 11:14:11.064234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:42.224 [2024-11-27 11:14:11.064276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:42.224 [2024-11-27 11:14:11.064286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 00:18:42.224 [2024-11-27 11:14:11.064295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.224 [2024-11-27 11:14:11.064341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.224 [2024-11-27 11:14:11.064352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:42.224 [2024-11-27 11:14:11.064361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:42.224 [2024-11-27 11:14:11.064369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.224 [2024-11-27 11:14:11.064395] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:42.224 [2024-11-27 11:14:11.064417] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:42.224 [2024-11-27 11:14:11.064457] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:42.224 [2024-11-27 11:14:11.064477] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:42.224 [2024-11-27 11:14:11.064582] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:42.224 [2024-11-27 11:14:11.064595] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:42.224 [2024-11-27 11:14:11.064606] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:42.224 [2024-11-27 11:14:11.064621] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:42.224 [2024-11-27 11:14:11.064630] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:42.224 [2024-11-27 11:14:11.064643] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:42.225 [2024-11-27 11:14:11.064654] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:42.225 [2024-11-27 11:14:11.064664] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:42.225 [2024-11-27 11:14:11.064672] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:42.225 [2024-11-27 11:14:11.064682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.225 [2024-11-27 11:14:11.064692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:42.225 [2024-11-27 11:14:11.064702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:18:42.225 [2024-11-27 11:14:11.064710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.225 [2024-11-27 11:14:11.064798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.225 [2024-11-27 11:14:11.064807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:42.225 [2024-11-27 11:14:11.064820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:42.225 [2024-11-27 11:14:11.064827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.225 [2024-11-27 11:14:11.064968] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:42.225 [2024-11-27 11:14:11.064981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:42.225 [2024-11-27 11:14:11.065000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:42.225 [2024-11-27 11:14:11.065012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:42.225 [2024-11-27 11:14:11.065025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:42.225 [2024-11-27 11:14:11.065034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:42.225 [2024-11-27 11:14:11.065044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:42.225 [2024-11-27 11:14:11.065053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:42.225 [2024-11-27 11:14:11.065069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:42.225 [2024-11-27 11:14:11.065078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:42.225 [2024-11-27 11:14:11.065088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:42.225 [2024-11-27 11:14:11.065096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:42.225 [2024-11-27 11:14:11.065105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:42.225 [2024-11-27 11:14:11.065113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:42.225 [2024-11-27 11:14:11.065123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:42.225 [2024-11-27 11:14:11.065131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:42.225 [2024-11-27 11:14:11.065140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:42.225 [2024-11-27 11:14:11.065148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:42.225 [2024-11-27 11:14:11.065157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:42.225 [2024-11-27 11:14:11.065166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:42.225 [2024-11-27 11:14:11.065178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:42.225 [2024-11-27 11:14:11.065188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:42.225 [2024-11-27 11:14:11.065199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:42.225 [2024-11-27 11:14:11.065207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:42.225 [2024-11-27 11:14:11.065219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:42.225 [2024-11-27 11:14:11.065232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:42.225 [2024-11-27 11:14:11.065242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:42.225 [2024-11-27 11:14:11.065249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:42.225 [2024-11-27 11:14:11.065260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:42.225 [2024-11-27 11:14:11.065268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:42.225 [2024-11-27 11:14:11.065277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:42.225 [2024-11-27 11:14:11.065285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:42.225 [2024-11-27 
11:14:11.065295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:42.225 [2024-11-27 11:14:11.065301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:42.225 [2024-11-27 11:14:11.065310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:42.225 [2024-11-27 11:14:11.065316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:42.225 [2024-11-27 11:14:11.065327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:42.225 [2024-11-27 11:14:11.065333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:42.225 [2024-11-27 11:14:11.065342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:42.225 [2024-11-27 11:14:11.065348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:42.225 [2024-11-27 11:14:11.065357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:42.225 [2024-11-27 11:14:11.065363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:42.225 [2024-11-27 11:14:11.065372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:42.225 [2024-11-27 11:14:11.065378] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:42.225 [2024-11-27 11:14:11.065387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:42.225 [2024-11-27 11:14:11.065395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:42.225 [2024-11-27 11:14:11.065407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:42.225 [2024-11-27 11:14:11.065415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:42.225 [2024-11-27 11:14:11.065423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:42.225 [2024-11-27 11:14:11.065429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:42.225 [2024-11-27 11:14:11.065440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:42.225 [2024-11-27 11:14:11.065446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:42.225 [2024-11-27 11:14:11.065456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:42.225 [2024-11-27 11:14:11.065465] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:42.225 [2024-11-27 11:14:11.065477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:42.225 [2024-11-27 11:14:11.065486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:42.225 [2024-11-27 11:14:11.065495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:42.225 [2024-11-27 11:14:11.065502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:42.225 [2024-11-27 11:14:11.065511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:42.225 [2024-11-27 11:14:11.065518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:42.225 
[2024-11-27 11:14:11.065527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:42.225 [2024-11-27 11:14:11.065535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:42.225 [2024-11-27 11:14:11.065544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:42.225 [2024-11-27 11:14:11.065552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:42.225 [2024-11-27 11:14:11.065562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:42.225 [2024-11-27 11:14:11.065569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:42.225 [2024-11-27 11:14:11.065578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:42.225 [2024-11-27 11:14:11.065585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:42.225 [2024-11-27 11:14:11.065596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:42.225 [2024-11-27 11:14:11.065603] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:42.225 [2024-11-27 11:14:11.065614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:42.225 [2024-11-27 11:14:11.065624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:42.225 [2024-11-27 11:14:11.065633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:42.225 [2024-11-27 11:14:11.065640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:42.225 [2024-11-27 11:14:11.065650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:42.225 [2024-11-27 11:14:11.065658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.225 [2024-11-27 11:14:11.065668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:42.225 [2024-11-27 11:14:11.065675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:18:42.225 [2024-11-27 11:14:11.065684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.225 [2024-11-27 11:14:11.079405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.225 [2024-11-27 11:14:11.079454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:42.225 [2024-11-27 11:14:11.079466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.662 ms 00:18:42.225 [2024-11-27 11:14:11.079476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.225 [2024-11-27 11:14:11.079610] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.225 [2024-11-27 11:14:11.079625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:42.225 [2024-11-27 11:14:11.079640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:42.225 [2024-11-27 11:14:11.079649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.225 [2024-11-27 11:14:11.090856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.226 [2024-11-27 11:14:11.090924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:42.226 [2024-11-27 11:14:11.090935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.185 ms 00:18:42.226 [2024-11-27 11:14:11.090945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.226 [2024-11-27 11:14:11.091008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.226 [2024-11-27 11:14:11.091023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:42.226 [2024-11-27 11:14:11.091031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:42.226 [2024-11-27 11:14:11.091041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.226 [2024-11-27 11:14:11.091519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.226 [2024-11-27 11:14:11.091551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:42.226 [2024-11-27 11:14:11.091562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:18:42.226 [2024-11-27 11:14:11.091578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.226 [2024-11-27 11:14:11.091721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.226 [2024-11-27 11:14:11.091736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:42.226 [2024-11-27 11:14:11.091747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:18:42.226 [2024-11-27 11:14:11.091758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.487 [2024-11-27 11:14:11.105749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.487 [2024-11-27 11:14:11.105968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:42.487 [2024-11-27 11:14:11.105991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.966 ms 00:18:42.487 [2024-11-27 11:14:11.106003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.487 [2024-11-27 11:14:11.109660] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:42.487 [2024-11-27 11:14:11.109840] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:42.487 [2024-11-27 11:14:11.109860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.487 [2024-11-27 11:14:11.109871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:42.487 [2024-11-27 11:14:11.109882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.709 ms 00:18:42.487 [2024-11-27 11:14:11.109915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.487 [2024-11-27 11:14:11.126344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.487 [2024-11-27 
11:14:11.126409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:42.487 [2024-11-27 11:14:11.126424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.062 ms 00:18:42.487 [2024-11-27 11:14:11.126437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.487 [2024-11-27 11:14:11.129627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.487 [2024-11-27 11:14:11.129685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:42.487 [2024-11-27 11:14:11.129696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.088 ms 00:18:42.487 [2024-11-27 11:14:11.129705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.487 [2024-11-27 11:14:11.132474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.487 [2024-11-27 11:14:11.132658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:42.487 [2024-11-27 11:14:11.132676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.713 ms 00:18:42.487 [2024-11-27 11:14:11.132686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.487 [2024-11-27 11:14:11.133062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.487 [2024-11-27 11:14:11.133084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:42.487 [2024-11-27 11:14:11.133094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:18:42.487 [2024-11-27 11:14:11.133103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.487 [2024-11-27 11:14:11.155611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.487 [2024-11-27 11:14:11.155825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:42.487 [2024-11-27 11:14:11.155846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.482 ms 00:18:42.487 [2024-11-27 11:14:11.155860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.487 [2024-11-27 11:14:11.163931] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:42.487 [2024-11-27 11:14:11.181536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.487 [2024-11-27 11:14:11.181584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:42.487 [2024-11-27 11:14:11.181599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.573 ms 00:18:42.487 [2024-11-27 11:14:11.181608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.487 [2024-11-27 11:14:11.181690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.487 [2024-11-27 11:14:11.181701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:42.487 [2024-11-27 11:14:11.181713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:42.487 [2024-11-27 11:14:11.181724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.487 [2024-11-27 11:14:11.181780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.488 [2024-11-27 11:14:11.181789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:42.488 [2024-11-27 11:14:11.181803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:42.488 [2024-11-27 
11:14:11.181811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.488 [2024-11-27 11:14:11.181838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.488 [2024-11-27 11:14:11.181847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:42.488 [2024-11-27 11:14:11.181859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:42.488 [2024-11-27 11:14:11.181867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.488 [2024-11-27 11:14:11.181938] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:42.488 [2024-11-27 11:14:11.181949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.488 [2024-11-27 11:14:11.181959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:42.488 [2024-11-27 11:14:11.181967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:42.488 [2024-11-27 11:14:11.181983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.488 [2024-11-27 11:14:11.187638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.488 [2024-11-27 11:14:11.187689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:42.488 [2024-11-27 11:14:11.187701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.631 ms 00:18:42.488 [2024-11-27 11:14:11.187711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.488 [2024-11-27 11:14:11.187799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.488 [2024-11-27 11:14:11.187816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:42.488 [2024-11-27 11:14:11.187825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:42.488 [2024-11-27 11:14:11.187835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.488 [2024-11-27 11:14:11.188791] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:42.488 [2024-11-27 11:14:11.190154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 145.303 ms, result 0 00:18:42.488 [2024-11-27 11:14:11.192159] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:42.488 Some configs were skipped because the RPC state that can call them passed over. 
00:18:42.488 11:14:11 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:42.749 [2024-11-27 11:14:11.417865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.750 [2024-11-27 11:14:11.417944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:42.750 [2024-11-27 11:14:11.417963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.137 ms 00:18:42.750 [2024-11-27 11:14:11.417972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.750 [2024-11-27 11:14:11.418012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.305 ms, result 0 00:18:42.750 true 00:18:42.750 11:14:11 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:43.012 [2024-11-27 11:14:11.633868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.012 [2024-11-27 11:14:11.633947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:43.012 [2024-11-27 11:14:11.633961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.854 ms 00:18:43.013 [2024-11-27 11:14:11.633971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-27 11:14:11.634011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.999 ms, result 0 00:18:43.013 true 00:18:43.013 11:14:11 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 86018 00:18:43.013 11:14:11 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86018 ']' 00:18:43.013 11:14:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86018 00:18:43.013 11:14:11 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:43.013 11:14:11 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:43.013 11:14:11 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86018 00:18:43.013 killing process with pid 86018 00:18:43.013 11:14:11 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:43.013 11:14:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:43.013 11:14:11 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86018' 00:18:43.013 11:14:11 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86018 00:18:43.013 11:14:11 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86018 00:18:43.013 [2024-11-27 11:14:11.784280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-27 11:14:11.784330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:43.013 [2024-11-27 11:14:11.784344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:43.013 [2024-11-27 11:14:11.784351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-27 11:14:11.784377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:43.013 [2024-11-27 11:14:11.784812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-27 11:14:11.784829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:43.013 [2024-11-27 11:14:11.784839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.422 ms 00:18:43.013 [2024-11-27 11:14:11.784848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-27 11:14:11.785151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-27 11:14:11.785170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:43.013 [2024-11-27 11:14:11.785180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:18:43.013 [2024-11-27 11:14:11.785190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-27 11:14:11.789710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-27 11:14:11.789745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:43.013 [2024-11-27 11:14:11.789761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.501 ms 00:18:43.013 [2024-11-27 11:14:11.789770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-27 11:14:11.796741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-27 11:14:11.796873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:43.013 [2024-11-27 11:14:11.796908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.939 ms 00:18:43.013 [2024-11-27 11:14:11.796920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-27 11:14:11.799178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-27 11:14:11.799208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:43.013 [2024-11-27 11:14:11.799216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.189 ms 00:18:43.013 [2024-11-27 11:14:11.799225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-27 11:14:11.803042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-27 11:14:11.803155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:43.013 [2024-11-27 11:14:11.803173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.784 ms 00:18:43.013 [2024-11-27 11:14:11.803182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-27 11:14:11.803306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-27 11:14:11.803317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:43.013 [2024-11-27 11:14:11.803331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:18:43.013 [2024-11-27 11:14:11.803339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-27 11:14:11.805978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-27 11:14:11.806012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:43.013 [2024-11-27 11:14:11.806021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.621 ms 00:18:43.013 [2024-11-27 11:14:11.806032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-27 11:14:11.808386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-27 11:14:11.808420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:43.013 [2024-11-27 
11:14:11.808429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.321 ms 00:18:43.013 [2024-11-27 11:14:11.808440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-27 11:14:11.809998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-27 11:14:11.810030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:43.013 [2024-11-27 11:14:11.810039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:18:43.013 [2024-11-27 11:14:11.810048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-27 11:14:11.812084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-27 11:14:11.812121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:43.013 [2024-11-27 11:14:11.812130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.975 ms 00:18:43.013 [2024-11-27 11:14:11.812140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-27 11:14:11.812174] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:43.013 [2024-11-27 11:14:11.812191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812334] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:43.013 [2024-11-27 11:14:11.812510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 
11:14:11.812542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:18:43.014 [2024-11-27 11:14:11.812749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.812997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.813004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.813017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.813044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.813053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.813061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.813075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.813083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:43.014 [2024-11-27 11:14:11.813100] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:43.014 [2024-11-27 11:14:11.813107] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e9e33ba-64fe-44ff-8c18-2e047b084a11 00:18:43.014 [2024-11-27 11:14:11.813117] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:43.014 [2024-11-27 11:14:11.813124] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:43.014 [2024-11-27 11:14:11.813132] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:43.014 [2024-11-27 11:14:11.813142] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:43.014 [2024-11-27 11:14:11.813150] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:43.014 [2024-11-27 11:14:11.813158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:43.014 [2024-11-27 11:14:11.813176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:43.014 [2024-11-27 11:14:11.813182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:43.014 [2024-11-27 11:14:11.813190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:43.014 [2024-11-27 11:14:11.813198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.014 [2024-11-27 11:14:11.813207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:43.014 [2024-11-27 11:14:11.813215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:18:43.014 [2024-11-27 11:14:11.813225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.014 [2024-11-27 11:14:11.814631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.014 [2024-11-27 11:14:11.814658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:43.014 [2024-11-27 11:14:11.814666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.378 ms 00:18:43.014 [2024-11-27 11:14:11.814675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.014 [2024-11-27 11:14:11.814751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:43.014 [2024-11-27 11:14:11.814761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:43.014 [2024-11-27 11:14:11.814769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:43.014 [2024-11-27 11:14:11.814778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.014 [2024-11-27 11:14:11.820034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.014 [2024-11-27 11:14:11.820066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.014 [2024-11-27 11:14:11.820075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.014 [2024-11-27 11:14:11.820084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.014 [2024-11-27 11:14:11.820161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.014 [2024-11-27 11:14:11.820173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.014 [2024-11-27 11:14:11.820180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.014 [2024-11-27 11:14:11.820191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.014 [2024-11-27 11:14:11.820227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.014 [2024-11-27 11:14:11.820240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.015 [2024-11-27 11:14:11.820247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.015 [2024-11-27 11:14:11.820255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.015 [2024-11-27 11:14:11.820272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.015 [2024-11-27 11:14:11.820282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.015 [2024-11-27 11:14:11.820289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.015 [2024-11-27 11:14:11.820297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.015 [2024-11-27 11:14:11.829516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.015 [2024-11-27 11:14:11.829559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.015 [2024-11-27 11:14:11.829569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.015 [2024-11-27 11:14:11.829577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.015 [2024-11-27 11:14:11.836677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.015 [2024-11-27 11:14:11.836718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.015 [2024-11-27 11:14:11.836728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.015 [2024-11-27 11:14:11.836739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.015 [2024-11-27 11:14:11.836794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.015 [2024-11-27 11:14:11.836805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:43.015 [2024-11-27 11:14:11.836813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.015 [2024-11-27 11:14:11.836825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:43.015 [2024-11-27 11:14:11.836854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.015 [2024-11-27 11:14:11.836864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:43.015 [2024-11-27 11:14:11.836872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.015 [2024-11-27 11:14:11.836882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.015 [2024-11-27 11:14:11.836984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.015 [2024-11-27 11:14:11.836996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:43.015 [2024-11-27 11:14:11.837008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.015 [2024-11-27 11:14:11.837019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.015 [2024-11-27 11:14:11.837048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.015 [2024-11-27 11:14:11.837059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:43.015 [2024-11-27 11:14:11.837066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.015 [2024-11-27 11:14:11.837077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.015 [2024-11-27 11:14:11.837112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.015 [2024-11-27 11:14:11.837122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:43.015 [2024-11-27 11:14:11.837130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.015 [2024-11-27 11:14:11.837142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.015 [2024-11-27 11:14:11.837185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.015 [2024-11-27 11:14:11.837196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:43.015 [2024-11-27 11:14:11.837205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.015 [2024-11-27 11:14:11.837217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.015 [2024-11-27 11:14:11.837353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.047 ms, result 0 00:18:43.276 11:14:12 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:43.276 [2024-11-27 11:14:12.089695] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:43.276 [2024-11-27 11:14:12.089820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86059 ] 00:18:43.538 [2024-11-27 11:14:12.240825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.538 [2024-11-27 11:14:12.289985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.538 [2024-11-27 11:14:12.398877] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.538 [2024-11-27 11:14:12.398963] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.801 [2024-11-27 11:14:12.560063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.801 [2024-11-27 11:14:12.560297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:43.801 [2024-11-27 11:14:12.560322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:43.801 [2024-11-27 11:14:12.560332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.801 [2024-11-27 11:14:12.562905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.801 [2024-11-27 11:14:12.562954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:43.801 [2024-11-27 11:14:12.562968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.521 ms 00:18:43.801 [2024-11-27 11:14:12.562976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.801 [2024-11-27 11:14:12.563087] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:43.801 [2024-11-27 11:14:12.563349] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:43.801 [2024-11-27 11:14:12.563367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.801 [2024-11-27 11:14:12.563376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:43.801 [2024-11-27 11:14:12.563393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:18:43.801 [2024-11-27 11:14:12.563401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.801 [2024-11-27 11:14:12.565260] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:43.801 [2024-11-27 11:14:12.569289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.801 [2024-11-27 11:14:12.569355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:43.801 [2024-11-27 11:14:12.569366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.032 ms 00:18:43.801 [2024-11-27 11:14:12.569377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.801 [2024-11-27 11:14:12.569484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.801 [2024-11-27 11:14:12.569496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:43.801 [2024-11-27 11:14:12.569505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:43.801 [2024-11-27 11:14:12.569513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.801 [2024-11-27 11:14:12.577950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:43.801 [2024-11-27 11:14:12.578134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:43.801 [2024-11-27 11:14:12.578163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.392 ms 00:18:43.801 [2024-11-27 11:14:12.578176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.801 [2024-11-27 11:14:12.578329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.801 [2024-11-27 11:14:12.578341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:43.801 [2024-11-27 11:14:12.578351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:43.801 [2024-11-27 11:14:12.578358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.801 [2024-11-27 11:14:12.578385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.801 [2024-11-27 11:14:12.578396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:43.801 [2024-11-27 11:14:12.578404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:43.801 [2024-11-27 11:14:12.578412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.801 [2024-11-27 11:14:12.578439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:43.801 [2024-11-27 11:14:12.580447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.801 [2024-11-27 11:14:12.580481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:43.801 [2024-11-27 11:14:12.580491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.018 ms 00:18:43.801 [2024-11-27 11:14:12.580499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.801 [2024-11-27 11:14:12.580543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.801 [2024-11-27 11:14:12.580557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:43.801 [2024-11-27 11:14:12.580568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:43.801 [2024-11-27 11:14:12.580576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.801 [2024-11-27 11:14:12.580594] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:43.801 [2024-11-27 11:14:12.580614] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:43.801 [2024-11-27 11:14:12.580659] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:43.801 [2024-11-27 11:14:12.580679] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:43.801 [2024-11-27 11:14:12.580786] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:43.801 [2024-11-27 11:14:12.580797] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:43.801 [2024-11-27 11:14:12.580807] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:43.801 [2024-11-27 11:14:12.580819] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:43.801 [2024-11-27 11:14:12.580835] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:43.801 [2024-11-27 11:14:12.580843] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:43.801 [2024-11-27 11:14:12.580855] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:43.801 [2024-11-27 11:14:12.580865] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:43.801 [2024-11-27 11:14:12.580873] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:43.801 [2024-11-27 11:14:12.580881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.801 [2024-11-27 11:14:12.580933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:43.801 [2024-11-27 11:14:12.580948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:18:43.801 [2024-11-27 11:14:12.580961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.801 [2024-11-27 11:14:12.581051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.802 [2024-11-27 11:14:12.581060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:43.802 [2024-11-27 11:14:12.581067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:43.802 [2024-11-27 11:14:12.581076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.802 [2024-11-27 11:14:12.581179] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:43.802 [2024-11-27 11:14:12.581197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:43.802 [2024-11-27 11:14:12.581207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.802 [2024-11-27 11:14:12.581220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.802 [2024-11-27 11:14:12.581229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:43.802 [2024-11-27 11:14:12.581237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:43.802 [2024-11-27 11:14:12.581245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:43.802 [2024-11-27 11:14:12.581254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:43.802 [2024-11-27 11:14:12.581263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:43.802 [2024-11-27 11:14:12.581271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.802 [2024-11-27 11:14:12.581279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:43.802 [2024-11-27 11:14:12.581287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:43.802 [2024-11-27 11:14:12.581295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.802 [2024-11-27 11:14:12.581303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:43.802 [2024-11-27 11:14:12.581311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:43.802 [2024-11-27 11:14:12.581319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.802 [2024-11-27 11:14:12.581327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:43.802 [2024-11-27 11:14:12.581335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:43.802 [2024-11-27 11:14:12.581343] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.802 [2024-11-27 11:14:12.581352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:43.802 [2024-11-27 11:14:12.581360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:43.802 [2024-11-27 11:14:12.581368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.802 [2024-11-27 11:14:12.581378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:43.802 [2024-11-27 11:14:12.581386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:43.802 [2024-11-27 11:14:12.581398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.802 [2024-11-27 11:14:12.581406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:43.802 [2024-11-27 11:14:12.581414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:43.802 [2024-11-27 11:14:12.581422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.802 [2024-11-27 11:14:12.581430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:43.802 [2024-11-27 11:14:12.581437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:43.802 [2024-11-27 11:14:12.581445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.802 [2024-11-27 11:14:12.581453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:43.802 [2024-11-27 11:14:12.581460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:43.802 [2024-11-27 11:14:12.581468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.802 [2024-11-27 11:14:12.581475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:43.802 [2024-11-27 11:14:12.581483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:43.802 [2024-11-27 11:14:12.581490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.802 [2024-11-27 11:14:12.581497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:43.802 [2024-11-27 11:14:12.581505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:43.802 [2024-11-27 11:14:12.581513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.802 [2024-11-27 11:14:12.581522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:43.802 [2024-11-27 11:14:12.581530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:43.802 [2024-11-27 11:14:12.581537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.802 [2024-11-27 11:14:12.581545] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:43.802 [2024-11-27 11:14:12.581552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:43.802 [2024-11-27 11:14:12.581560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.802 [2024-11-27 11:14:12.581569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.802 [2024-11-27 11:14:12.581576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:43.802 [2024-11-27 11:14:12.581582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:43.802 [2024-11-27 11:14:12.581589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:43.802 
[2024-11-27 11:14:12.581595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:43.802 [2024-11-27 11:14:12.581602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:43.802 [2024-11-27 11:14:12.581609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:43.802 [2024-11-27 11:14:12.581617] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:43.802 [2024-11-27 11:14:12.581632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.802 [2024-11-27 11:14:12.581641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:43.802 [2024-11-27 11:14:12.581651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:43.802 [2024-11-27 11:14:12.581659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:43.802 [2024-11-27 11:14:12.581666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:43.802 [2024-11-27 11:14:12.581674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:43.802 [2024-11-27 11:14:12.581681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:43.802 [2024-11-27 11:14:12.581688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:43.802 [2024-11-27 11:14:12.581694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:43.802 [2024-11-27 11:14:12.581702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:43.802 [2024-11-27 11:14:12.581709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:43.802 [2024-11-27 11:14:12.581716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:43.802 [2024-11-27 11:14:12.581723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:43.802 [2024-11-27 11:14:12.581731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:43.802 [2024-11-27 11:14:12.581738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:43.802 [2024-11-27 11:14:12.581745] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:43.803 [2024-11-27 11:14:12.581753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.803 [2024-11-27 11:14:12.581761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:43.803 [2024-11-27 11:14:12.581770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:43.803 [2024-11-27 11:14:12.581778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:43.803 [2024-11-27 11:14:12.581785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:43.803 [2024-11-27 11:14:12.581793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.581800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:43.803 [2024-11-27 11:14:12.581814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:18:43.803 [2024-11-27 11:14:12.581821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.803 [2024-11-27 11:14:12.605645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.605714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.803 [2024-11-27 11:14:12.605733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.771 ms 00:18:43.803 [2024-11-27 11:14:12.605745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.803 [2024-11-27 11:14:12.605989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.606009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:43.803 [2024-11-27 11:14:12.606033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:18:43.803 [2024-11-27 11:14:12.606048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.803 [2024-11-27 11:14:12.618854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.618945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.803 [2024-11-27 11:14:12.618958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.772 ms 00:18:43.803 [2024-11-27 11:14:12.618966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.803 [2024-11-27 11:14:12.619045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.619056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.803 [2024-11-27 11:14:12.619068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:43.803 [2024-11-27 11:14:12.619076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.803 [2024-11-27 11:14:12.619560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.619596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.803 [2024-11-27 11:14:12.619608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:18:43.803 [2024-11-27 11:14:12.619625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.803 [2024-11-27 11:14:12.619781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.619791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.803 [2024-11-27 11:14:12.619801] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:18:43.803 [2024-11-27 11:14:12.619815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.803 [2024-11-27 11:14:12.626981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.627031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.803 [2024-11-27 11:14:12.627041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.141 ms 00:18:43.803 [2024-11-27 11:14:12.627049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.803 [2024-11-27 11:14:12.630886] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:43.803 [2024-11-27 11:14:12.630981] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:43.803 [2024-11-27 11:14:12.630993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.631002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:43.803 [2024-11-27 11:14:12.631011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.856 ms 00:18:43.803 [2024-11-27 11:14:12.631018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.803 [2024-11-27 11:14:12.646655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.646702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:43.803 [2024-11-27 11:14:12.646715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.570 ms 00:18:43.803 [2024-11-27 11:14:12.646723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.803 [2024-11-27 11:14:12.649669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.649716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:43.803 [2024-11-27 11:14:12.649726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.842 ms 00:18:43.803 [2024-11-27 11:14:12.649733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.803 [2024-11-27 11:14:12.652423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.652604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:43.803 [2024-11-27 11:14:12.652633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.634 ms 00:18:43.803 [2024-11-27 11:14:12.652640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.803 [2024-11-27 11:14:12.653054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.653076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:43.803 [2024-11-27 11:14:12.653090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:18:43.803 [2024-11-27 11:14:12.653104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.803 [2024-11-27 11:14:12.675503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.803 [2024-11-27 11:14:12.675705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:43.803 [2024-11-27 11:14:12.675726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.372 ms 00:18:43.803 [2024-11-27 11:14:12.675735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-11-27 11:14:12.683805] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:44.066 [2024-11-27 11:14:12.701925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-11-27 11:14:12.702116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:44.066 [2024-11-27 11:14:12.702135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.107 ms 00:18:44.066 [2024-11-27 11:14:12.702144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-11-27 11:14:12.702239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-11-27 11:14:12.702251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:44.066 [2024-11-27 11:14:12.702261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:44.066 [2024-11-27 11:14:12.702269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-11-27 11:14:12.702327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-11-27 11:14:12.702337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:44.066 [2024-11-27 11:14:12.702345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:44.066 [2024-11-27 11:14:12.702354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-11-27 11:14:12.702377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-11-27 11:14:12.702385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:44.066 [2024-11-27 11:14:12.702394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:44.066 [2024-11-27 11:14:12.702402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-11-27 11:14:12.702438] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:44.066 [2024-11-27 11:14:12.702450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-11-27 11:14:12.702458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:44.066 [2024-11-27 11:14:12.702467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:44.066 [2024-11-27 11:14:12.702480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-11-27 11:14:12.708186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-11-27 11:14:12.708231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:44.066 [2024-11-27 11:14:12.708243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.681 ms 00:18:44.066 [2024-11-27 11:14:12.708251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 [2024-11-27 11:14:12.708343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.066 [2024-11-27 11:14:12.708357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:44.066 [2024-11-27 11:14:12.708366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:44.066 [2024-11-27 11:14:12.708379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.066 
[2024-11-27 11:14:12.709446] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:44.066 [2024-11-27 11:14:12.710763] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 149.092 ms, result 0 00:18:44.066 [2024-11-27 11:14:12.711924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:44.066 [2024-11-27 11:14:12.719384] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:45.068  [2024-11-27T11:14:14.892Z] Copying: 17/256 [MB] (17 MBps) [2024-11-27T11:14:15.836Z] Copying: 28/256 [MB] (11 MBps) [2024-11-27T11:14:16.781Z] Copying: 45/256 [MB] (16 MBps) [2024-11-27T11:14:18.170Z] Copying: 64/256 [MB] (19 MBps) [2024-11-27T11:14:19.114Z] Copying: 80/256 [MB] (16 MBps) [2024-11-27T11:14:20.054Z] Copying: 95/256 [MB] (14 MBps) [2024-11-27T11:14:20.997Z] Copying: 107/256 [MB] (12 MBps) [2024-11-27T11:14:21.940Z] Copying: 124/256 [MB] (16 MBps) [2024-11-27T11:14:22.883Z] Copying: 140/256 [MB] (16 MBps) [2024-11-27T11:14:23.831Z] Copying: 158/256 [MB] (18 MBps) [2024-11-27T11:14:25.214Z] Copying: 174/256 [MB] (15 MBps) [2024-11-27T11:14:25.784Z] Copying: 190/256 [MB] (16 MBps) [2024-11-27T11:14:27.167Z] Copying: 200/256 [MB] (10 MBps) [2024-11-27T11:14:28.109Z] Copying: 215768/262144 [kB] (10168 kBps) [2024-11-27T11:14:29.052Z] Copying: 223/256 [MB] (13 MBps) [2024-11-27T11:14:29.626Z] Copying: 242/256 [MB] (18 MBps) [2024-11-27T11:14:29.889Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-27 11:14:29.724229] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:01.006 [2024-11-27 11:14:29.726733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.006 [2024-11-27 11:14:29.726815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:01.006 [2024-11-27 11:14:29.726851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:01.006 [2024-11-27 11:14:29.726869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.006 [2024-11-27 11:14:29.726950] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:01.006 [2024-11-27 11:14:29.727810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.006 [2024-11-27 11:14:29.727872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:01.006 [2024-11-27 11:14:29.727919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:19:01.006 [2024-11-27 11:14:29.727940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.006 [2024-11-27 11:14:29.728546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.006 [2024-11-27 11:14:29.728586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:01.006 [2024-11-27 11:14:29.728605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:19:01.006 [2024-11-27 11:14:29.728621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.006 [2024-11-27 11:14:29.733901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.006 [2024-11-27 11:14:29.733950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:01.006 [2024-11-27 11:14:29.733962] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.238 ms 00:19:01.006 [2024-11-27 11:14:29.733976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.006 [2024-11-27 11:14:29.741767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.006 [2024-11-27 11:14:29.741812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:01.006 [2024-11-27 11:14:29.741824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.741 ms 00:19:01.006 [2024-11-27 11:14:29.741832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.006 [2024-11-27 11:14:29.744806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.006 [2024-11-27 11:14:29.744856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:01.006 [2024-11-27 11:14:29.744868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:19:01.006 [2024-11-27 11:14:29.744917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.006 [2024-11-27 11:14:29.749108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.006 [2024-11-27 11:14:29.749306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:01.006 [2024-11-27 11:14:29.749336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.138 ms 00:19:01.006 [2024-11-27 11:14:29.749345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.006 [2024-11-27 11:14:29.749573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.006 [2024-11-27 11:14:29.749602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:01.006 [2024-11-27 11:14:29.749613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:01.006 [2024-11-27 11:14:29.749621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.006 [2024-11-27 11:14:29.752784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.006 [2024-11-27 11:14:29.752997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:01.006 [2024-11-27 11:14:29.753018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.144 ms 00:19:01.006 [2024-11-27 11:14:29.753026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.006 [2024-11-27 11:14:29.755681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.006 [2024-11-27 11:14:29.755729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:01.006 [2024-11-27 11:14:29.755739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:19:01.006 [2024-11-27 11:14:29.755745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.006 [2024-11-27 11:14:29.758257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.006 [2024-11-27 11:14:29.758304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:01.006 [2024-11-27 11:14:29.758314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.466 ms 00:19:01.006 [2024-11-27 11:14:29.758322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.006 [2024-11-27 11:14:29.760477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.006 [2024-11-27 11:14:29.760523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Set FTL clean state 00:19:01.006 [2024-11-27 11:14:29.760532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:19:01.006 [2024-11-27 11:14:29.760539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.006 [2024-11-27 11:14:29.760582] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:01.006 [2024-11-27 11:14:29.760604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:01.006 [2024-11-27 11:14:29.760615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:01.006 [2024-11-27 11:14:29.760623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:01.006 [2024-11-27 11:14:29.760631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:01.006 [2024-11-27 11:14:29.760640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:01.006 [2024-11-27 11:14:29.760647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:01.006 [2024-11-27 11:14:29.760655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:01.006 [2024-11-27 11:14:29.760663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:01.006 [2024-11-27 11:14:29.760671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760779] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.760995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 
[2024-11-27 11:14:29.761011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 
state: free 00:19:01.007 [2024-11-27 11:14:29.761206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 
0 / 261120 wr_cnt: 0 state: free 00:19:01.007 [2024-11-27 11:14:29.761403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:01.008 [2024-11-27 11:14:29.761412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:01.008 [2024-11-27 11:14:29.761420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:01.008 [2024-11-27 11:14:29.761435] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:01.008 [2024-11-27 11:14:29.761443] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5e9e33ba-64fe-44ff-8c18-2e047b084a11 00:19:01.008 [2024-11-27 11:14:29.761462] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:01.008 [2024-11-27 11:14:29.761474] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:01.008 [2024-11-27 11:14:29.761482] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:01.008 [2024-11-27 11:14:29.761490] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:01.008 [2024-11-27 11:14:29.761498] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:01.008 [2024-11-27 11:14:29.761507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:01.008 [2024-11-27 11:14:29.761515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:01.008 [2024-11-27 11:14:29.761522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:01.008 [2024-11-27 11:14:29.761528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:01.008 [2024-11-27 11:14:29.761536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.008 [2024-11-27 11:14:29.761544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:01.008 [2024-11-27 11:14:29.761556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:19:01.008 [2024-11-27 11:14:29.761564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.763864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.008 [2024-11-27 11:14:29.764035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:01.008 [2024-11-27 11:14:29.764053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.281 ms 00:19:01.008 [2024-11-27 11:14:29.764069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.764202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.008 [2024-11-27 11:14:29.764215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:01.008 [2024-11-27 11:14:29.764224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:01.008 [2024-11-27 11:14:29.764232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.771602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.008 [2024-11-27 11:14:29.771766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:01.008 [2024-11-27 11:14:29.771783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.008 [2024-11-27 11:14:29.771792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 
11:14:29.771861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.008 [2024-11-27 11:14:29.771873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:01.008 [2024-11-27 11:14:29.771909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.008 [2024-11-27 11:14:29.771917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.771972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.008 [2024-11-27 11:14:29.771982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:01.008 [2024-11-27 11:14:29.771991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.008 [2024-11-27 11:14:29.771998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.772017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.008 [2024-11-27 11:14:29.772026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:01.008 [2024-11-27 11:14:29.772037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.008 [2024-11-27 11:14:29.772045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.785321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.008 [2024-11-27 11:14:29.785372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:01.008 [2024-11-27 11:14:29.785384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.008 [2024-11-27 11:14:29.785393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.795455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.008 [2024-11-27 11:14:29.795519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:01.008 [2024-11-27 11:14:29.795531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.008 [2024-11-27 11:14:29.795545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.795599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.008 [2024-11-27 11:14:29.795609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:01.008 [2024-11-27 11:14:29.795617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.008 [2024-11-27 11:14:29.795626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.795656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.008 [2024-11-27 11:14:29.795665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:01.008 [2024-11-27 11:14:29.795673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.008 [2024-11-27 11:14:29.795684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.795759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.008 [2024-11-27 11:14:29.795770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:01.008 [2024-11-27 11:14:29.795778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.008 [2024-11-27 11:14:29.795786] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.795817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.008 [2024-11-27 11:14:29.795826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:01.008 [2024-11-27 11:14:29.795835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.008 [2024-11-27 11:14:29.795850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.795915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.008 [2024-11-27 11:14:29.795930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:01.008 [2024-11-27 11:14:29.795938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.008 [2024-11-27 11:14:29.795947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.795993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.008 [2024-11-27 11:14:29.796002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:01.008 [2024-11-27 11:14:29.796012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.008 [2024-11-27 11:14:29.796023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.008 [2024-11-27 11:14:29.796168] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.428 ms, result 0 00:19:01.270 00:19:01.270 00:19:01.270 11:14:30 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:01.843 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:01.843 11:14:30 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:01.843 11:14:30 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:01.843 11:14:30 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:01.843 11:14:30 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:01.843 11:14:30 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:01.843 11:14:30 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:01.843 11:14:30 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 86018 00:19:01.843 11:14:30 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86018 ']' 00:19:01.843 11:14:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86018 00:19:01.843 Process with pid 86018 is not found 00:19:01.843 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86018) - No such process 00:19:01.843 11:14:30 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 86018 is not found' 00:19:01.843 00:19:01.843 real 1m9.808s 00:19:01.843 user 1m28.175s 00:19:01.843 sys 0m5.351s 00:19:01.843 ************************************ 00:19:01.843 END TEST ftl_trim 00:19:01.843 ************************************ 00:19:01.843 11:14:30 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:01.843 11:14:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:02.106 11:14:30 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:02.106 11:14:30 ftl -- common/autotest_common.sh@1101 -- # '[' 5 
-le 1 ']' 00:19:02.106 11:14:30 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:02.106 11:14:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:02.106 ************************************ 00:19:02.106 START TEST ftl_restore 00:19:02.106 ************************************ 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:02.106 * Looking for test storage... 00:19:02.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.106 11:14:30 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:02.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.106 --rc genhtml_branch_coverage=1 00:19:02.106 --rc genhtml_function_coverage=1 00:19:02.106 --rc genhtml_legend=1 00:19:02.106 --rc geninfo_all_blocks=1 00:19:02.106 --rc geninfo_unexecuted_blocks=1 00:19:02.106 00:19:02.106 ' 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:02.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.106 --rc genhtml_branch_coverage=1 00:19:02.106 --rc genhtml_function_coverage=1 00:19:02.106 --rc genhtml_legend=1 00:19:02.106 --rc geninfo_all_blocks=1 00:19:02.106 --rc geninfo_unexecuted_blocks=1 00:19:02.106 00:19:02.106 ' 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:02.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.106 --rc genhtml_branch_coverage=1 00:19:02.106 --rc genhtml_function_coverage=1 00:19:02.106 --rc genhtml_legend=1 00:19:02.106 --rc geninfo_all_blocks=1 00:19:02.106 --rc geninfo_unexecuted_blocks=1 00:19:02.106 00:19:02.106 ' 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:02.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.106 --rc genhtml_branch_coverage=1 00:19:02.106 --rc genhtml_function_coverage=1 00:19:02.106 --rc genhtml_legend=1 00:19:02.106 --rc geninfo_all_blocks=1 00:19:02.106 --rc geninfo_unexecuted_blocks=1 00:19:02.106 00:19:02.106 ' 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.1GGnjh9ajJ 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:02.106 
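[Editor's note — added for orientation; not part of the captured console output.] From this point restore.sh assembles its FTL device stack through a short sequence of RPC calls against the spdk_tgt it starts next (pid 86322 in this run). A minimal sketch of that sequence, using the same scripts/rpc.py invocations and values traced further down in this log; the lvstore/lvol UUIDs are generated at run time, so the bracketed placeholders below are illustrative only:

    # base device: QEMU NVMe at 0000:00:11.0 (4096 B blocks x 1310720 = 5120 MiB)
    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    # drop any lvstore left over from an earlier test, then create a fresh one
    rpc.py bdev_lvol_delete_lvstore -u <old-lvs-uuid>
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    # thin-provisioned (-t) 103424 MiB volume that becomes the FTL base bdev
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>
    # NV cache device at 0000:00:10.0, split into a single 5171 MiB partition nvc0n1p0
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    rpc.py bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev on top of the lvol, with nvc0n1p0 as the write buffer cache
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0

The --l2p_dram_limit 10 argument comes from restore.sh's l2p_dram_size_mb=10 and, per the flag name, bounds how much of the L2P table may be kept in DRAM (10 MiB here); the FTL startup trace below reports the full L2P region as 20971520 entries x 4 B = 80 MiB.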
11:14:30 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86322 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86322 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 86322 ']' 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.106 11:14:30 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.106 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.107 11:14:30 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:02.367 [2024-11-27 11:14:31.065664] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:02.367 [2024-11-27 11:14:31.066055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86322 ] 00:19:02.367 [2024-11-27 11:14:31.219370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.629 [2024-11-27 11:14:31.281110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.201 11:14:31 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.201 11:14:31 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:19:03.201 11:14:31 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:03.201 11:14:31 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:03.201 11:14:31 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:03.201 11:14:31 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:03.201 11:14:31 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:03.201 11:14:31 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:03.462 11:14:32 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:03.462 11:14:32 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:03.462 11:14:32 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:03.462 11:14:32 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:03.462 11:14:32 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:03.462 11:14:32 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:03.462 11:14:32 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:03.462 11:14:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:03.724 11:14:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:03.724 { 00:19:03.724 "name": "nvme0n1", 00:19:03.724 "aliases": [ 00:19:03.724 "14a23bf0-ffb1-4e2b-b6e3-04362ea4c139" 00:19:03.724 ], 00:19:03.724 "product_name": "NVMe disk", 00:19:03.724 "block_size": 4096, 00:19:03.724 "num_blocks": 1310720, 00:19:03.724 "uuid": 
"14a23bf0-ffb1-4e2b-b6e3-04362ea4c139", 00:19:03.724 "numa_id": -1, 00:19:03.724 "assigned_rate_limits": { 00:19:03.724 "rw_ios_per_sec": 0, 00:19:03.724 "rw_mbytes_per_sec": 0, 00:19:03.724 "r_mbytes_per_sec": 0, 00:19:03.724 "w_mbytes_per_sec": 0 00:19:03.724 }, 00:19:03.724 "claimed": true, 00:19:03.724 "claim_type": "read_many_write_one", 00:19:03.724 "zoned": false, 00:19:03.724 "supported_io_types": { 00:19:03.724 "read": true, 00:19:03.724 "write": true, 00:19:03.724 "unmap": true, 00:19:03.724 "flush": true, 00:19:03.724 "reset": true, 00:19:03.724 "nvme_admin": true, 00:19:03.724 "nvme_io": true, 00:19:03.724 "nvme_io_md": false, 00:19:03.724 "write_zeroes": true, 00:19:03.724 "zcopy": false, 00:19:03.724 "get_zone_info": false, 00:19:03.724 "zone_management": false, 00:19:03.724 "zone_append": false, 00:19:03.724 "compare": true, 00:19:03.724 "compare_and_write": false, 00:19:03.724 "abort": true, 00:19:03.724 "seek_hole": false, 00:19:03.724 "seek_data": false, 00:19:03.724 "copy": true, 00:19:03.724 "nvme_iov_md": false 00:19:03.724 }, 00:19:03.724 "driver_specific": { 00:19:03.724 "nvme": [ 00:19:03.724 { 00:19:03.724 "pci_address": "0000:00:11.0", 00:19:03.724 "trid": { 00:19:03.724 "trtype": "PCIe", 00:19:03.724 "traddr": "0000:00:11.0" 00:19:03.724 }, 00:19:03.724 "ctrlr_data": { 00:19:03.724 "cntlid": 0, 00:19:03.724 "vendor_id": "0x1b36", 00:19:03.724 "model_number": "QEMU NVMe Ctrl", 00:19:03.724 "serial_number": "12341", 00:19:03.724 "firmware_revision": "8.0.0", 00:19:03.724 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:03.724 "oacs": { 00:19:03.724 "security": 0, 00:19:03.724 "format": 1, 00:19:03.724 "firmware": 0, 00:19:03.724 "ns_manage": 1 00:19:03.724 }, 00:19:03.724 "multi_ctrlr": false, 00:19:03.724 "ana_reporting": false 00:19:03.724 }, 00:19:03.724 "vs": { 00:19:03.724 "nvme_version": "1.4" 00:19:03.724 }, 00:19:03.724 "ns_data": { 00:19:03.724 "id": 1, 00:19:03.724 "can_share": false 00:19:03.724 } 00:19:03.724 } 00:19:03.724 ], 00:19:03.724 "mp_policy": "active_passive" 00:19:03.724 } 00:19:03.724 } 00:19:03.724 ]' 00:19:03.724 11:14:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:03.724 11:14:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:03.724 11:14:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:03.724 11:14:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:03.724 11:14:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:03.724 11:14:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:19:03.724 11:14:32 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:03.724 11:14:32 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:03.724 11:14:32 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:03.724 11:14:32 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:03.724 11:14:32 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:03.985 11:14:32 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=1f041955-5ce1-4595-86ad-feb27b37031c 00:19:03.985 11:14:32 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:03.985 11:14:32 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1f041955-5ce1-4595-86ad-feb27b37031c 00:19:04.247 11:14:32 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:19:04.509 11:14:33 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=30ea02a5-d779-4db0-ad2d-33361c80fa93 00:19:04.509 11:14:33 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 30ea02a5-d779-4db0-ad2d-33361c80fa93 00:19:04.509 11:14:33 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=889878b5-dbb3-485e-8546-4a04d332d80b 00:19:04.509 11:14:33 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:04.509 11:14:33 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 889878b5-dbb3-485e-8546-4a04d332d80b 00:19:04.509 11:14:33 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:04.509 11:14:33 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:04.509 11:14:33 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=889878b5-dbb3-485e-8546-4a04d332d80b 00:19:04.509 11:14:33 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:04.509 11:14:33 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 889878b5-dbb3-485e-8546-4a04d332d80b 00:19:04.509 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=889878b5-dbb3-485e-8546-4a04d332d80b 00:19:04.509 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:04.509 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:04.509 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:04.509 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 889878b5-dbb3-485e-8546-4a04d332d80b 00:19:04.770 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:04.770 { 00:19:04.770 "name": "889878b5-dbb3-485e-8546-4a04d332d80b", 00:19:04.770 "aliases": [ 00:19:04.770 "lvs/nvme0n1p0" 00:19:04.770 ], 00:19:04.770 "product_name": "Logical Volume", 00:19:04.770 "block_size": 4096, 00:19:04.770 "num_blocks": 26476544, 00:19:04.770 "uuid": "889878b5-dbb3-485e-8546-4a04d332d80b", 00:19:04.770 "assigned_rate_limits": { 00:19:04.770 "rw_ios_per_sec": 0, 00:19:04.770 "rw_mbytes_per_sec": 0, 00:19:04.770 "r_mbytes_per_sec": 0, 00:19:04.770 "w_mbytes_per_sec": 0 00:19:04.770 }, 00:19:04.770 "claimed": false, 00:19:04.770 "zoned": false, 00:19:04.770 "supported_io_types": { 00:19:04.770 "read": true, 00:19:04.770 "write": true, 00:19:04.770 "unmap": true, 00:19:04.770 "flush": false, 00:19:04.770 "reset": true, 00:19:04.770 "nvme_admin": false, 00:19:04.770 "nvme_io": false, 00:19:04.770 "nvme_io_md": false, 00:19:04.770 "write_zeroes": true, 00:19:04.770 "zcopy": false, 00:19:04.770 "get_zone_info": false, 00:19:04.770 "zone_management": false, 00:19:04.770 "zone_append": false, 00:19:04.770 "compare": false, 00:19:04.770 "compare_and_write": false, 00:19:04.770 "abort": false, 00:19:04.770 "seek_hole": true, 00:19:04.770 "seek_data": true, 00:19:04.770 "copy": false, 00:19:04.770 "nvme_iov_md": false 00:19:04.770 }, 00:19:04.770 "driver_specific": { 00:19:04.770 "lvol": { 00:19:04.771 "lvol_store_uuid": "30ea02a5-d779-4db0-ad2d-33361c80fa93", 00:19:04.771 "base_bdev": "nvme0n1", 00:19:04.771 "thin_provision": true, 00:19:04.771 "num_allocated_clusters": 0, 00:19:04.771 "snapshot": false, 00:19:04.771 "clone": false, 00:19:04.771 "esnap_clone": false 00:19:04.771 } 00:19:04.771 } 00:19:04.771 } 00:19:04.771 ]' 00:19:04.771 11:14:33 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:04.771 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:04.771 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:04.771 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:04.771 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:04.771 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:04.771 11:14:33 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:04.771 11:14:33 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:04.771 11:14:33 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:05.341 11:14:33 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:05.341 11:14:33 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:05.341 11:14:33 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 889878b5-dbb3-485e-8546-4a04d332d80b 00:19:05.341 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=889878b5-dbb3-485e-8546-4a04d332d80b 00:19:05.341 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:05.341 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:05.341 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:05.341 11:14:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 889878b5-dbb3-485e-8546-4a04d332d80b 00:19:05.341 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:05.341 { 00:19:05.341 "name": "889878b5-dbb3-485e-8546-4a04d332d80b", 00:19:05.341 "aliases": [ 00:19:05.341 "lvs/nvme0n1p0" 00:19:05.341 ], 00:19:05.341 "product_name": "Logical Volume", 00:19:05.341 "block_size": 4096, 00:19:05.341 "num_blocks": 26476544, 00:19:05.341 "uuid": "889878b5-dbb3-485e-8546-4a04d332d80b", 00:19:05.341 "assigned_rate_limits": { 00:19:05.341 "rw_ios_per_sec": 0, 00:19:05.341 "rw_mbytes_per_sec": 0, 00:19:05.341 "r_mbytes_per_sec": 0, 00:19:05.341 "w_mbytes_per_sec": 0 00:19:05.341 }, 00:19:05.341 "claimed": false, 00:19:05.341 "zoned": false, 00:19:05.341 "supported_io_types": { 00:19:05.341 "read": true, 00:19:05.341 "write": true, 00:19:05.341 "unmap": true, 00:19:05.341 "flush": false, 00:19:05.341 "reset": true, 00:19:05.341 "nvme_admin": false, 00:19:05.341 "nvme_io": false, 00:19:05.341 "nvme_io_md": false, 00:19:05.341 "write_zeroes": true, 00:19:05.341 "zcopy": false, 00:19:05.341 "get_zone_info": false, 00:19:05.341 "zone_management": false, 00:19:05.341 "zone_append": false, 00:19:05.341 "compare": false, 00:19:05.341 "compare_and_write": false, 00:19:05.341 "abort": false, 00:19:05.341 "seek_hole": true, 00:19:05.341 "seek_data": true, 00:19:05.342 "copy": false, 00:19:05.342 "nvme_iov_md": false 00:19:05.342 }, 00:19:05.342 "driver_specific": { 00:19:05.342 "lvol": { 00:19:05.342 "lvol_store_uuid": "30ea02a5-d779-4db0-ad2d-33361c80fa93", 00:19:05.342 "base_bdev": "nvme0n1", 00:19:05.342 "thin_provision": true, 00:19:05.342 "num_allocated_clusters": 0, 00:19:05.342 "snapshot": false, 00:19:05.342 "clone": false, 00:19:05.342 "esnap_clone": false 00:19:05.342 } 00:19:05.342 } 00:19:05.342 } 00:19:05.342 ]' 00:19:05.342 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
00:19:05.342 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:05.342 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:05.342 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:05.342 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:05.342 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:05.342 11:14:34 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:05.342 11:14:34 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:05.602 11:14:34 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:05.602 11:14:34 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 889878b5-dbb3-485e-8546-4a04d332d80b 00:19:05.602 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=889878b5-dbb3-485e-8546-4a04d332d80b 00:19:05.602 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:05.602 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:05.602 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:05.602 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 889878b5-dbb3-485e-8546-4a04d332d80b 00:19:05.862 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:05.862 { 00:19:05.862 "name": "889878b5-dbb3-485e-8546-4a04d332d80b", 00:19:05.862 "aliases": [ 00:19:05.862 "lvs/nvme0n1p0" 00:19:05.862 ], 00:19:05.862 "product_name": "Logical Volume", 00:19:05.862 "block_size": 4096, 00:19:05.862 "num_blocks": 26476544, 00:19:05.862 "uuid": "889878b5-dbb3-485e-8546-4a04d332d80b", 00:19:05.862 "assigned_rate_limits": { 00:19:05.862 "rw_ios_per_sec": 0, 00:19:05.862 "rw_mbytes_per_sec": 0, 00:19:05.862 "r_mbytes_per_sec": 0, 00:19:05.862 "w_mbytes_per_sec": 0 00:19:05.862 }, 00:19:05.862 "claimed": false, 00:19:05.862 "zoned": false, 00:19:05.862 "supported_io_types": { 00:19:05.862 "read": true, 00:19:05.862 "write": true, 00:19:05.862 "unmap": true, 00:19:05.862 "flush": false, 00:19:05.862 "reset": true, 00:19:05.862 "nvme_admin": false, 00:19:05.862 "nvme_io": false, 00:19:05.862 "nvme_io_md": false, 00:19:05.862 "write_zeroes": true, 00:19:05.862 "zcopy": false, 00:19:05.862 "get_zone_info": false, 00:19:05.862 "zone_management": false, 00:19:05.862 "zone_append": false, 00:19:05.862 "compare": false, 00:19:05.862 "compare_and_write": false, 00:19:05.862 "abort": false, 00:19:05.862 "seek_hole": true, 00:19:05.862 "seek_data": true, 00:19:05.862 "copy": false, 00:19:05.862 "nvme_iov_md": false 00:19:05.862 }, 00:19:05.863 "driver_specific": { 00:19:05.863 "lvol": { 00:19:05.863 "lvol_store_uuid": "30ea02a5-d779-4db0-ad2d-33361c80fa93", 00:19:05.863 "base_bdev": "nvme0n1", 00:19:05.863 "thin_provision": true, 00:19:05.863 "num_allocated_clusters": 0, 00:19:05.863 "snapshot": false, 00:19:05.863 "clone": false, 00:19:05.863 "esnap_clone": false 00:19:05.863 } 00:19:05.863 } 00:19:05.863 } 00:19:05.863 ]' 00:19:05.863 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:05.863 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:05.863 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:05.863 11:14:34 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:19:05.863 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:05.863 11:14:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:05.863 11:14:34 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:05.863 11:14:34 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 889878b5-dbb3-485e-8546-4a04d332d80b --l2p_dram_limit 10' 00:19:05.863 11:14:34 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:05.863 11:14:34 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:05.863 11:14:34 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:05.863 11:14:34 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:05.863 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:05.863 11:14:34 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 889878b5-dbb3-485e-8546-4a04d332d80b --l2p_dram_limit 10 -c nvc0n1p0 00:19:06.123 [2024-11-27 11:14:34.932003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.123 [2024-11-27 11:14:34.932062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:06.123 [2024-11-27 11:14:34.932078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:06.123 [2024-11-27 11:14:34.932089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.123 [2024-11-27 11:14:34.932156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.123 [2024-11-27 11:14:34.932169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:06.123 [2024-11-27 11:14:34.932177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:06.123 [2024-11-27 11:14:34.932191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.123 [2024-11-27 11:14:34.932219] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:06.123 [2024-11-27 11:14:34.932549] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:06.123 [2024-11-27 11:14:34.932566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.123 [2024-11-27 11:14:34.932576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:06.123 [2024-11-27 11:14:34.932589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:19:06.123 [2024-11-27 11:14:34.932599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.123 [2024-11-27 11:14:34.932637] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cf8b7018-7e12-468c-960c-3a6629bb4ab1 00:19:06.123 [2024-11-27 11:14:34.934701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.123 [2024-11-27 11:14:34.934755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:06.123 [2024-11-27 11:14:34.934771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:06.123 [2024-11-27 11:14:34.934779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.123 [2024-11-27 11:14:34.943351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.123 [2024-11-27 
11:14:34.943388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:06.123 [2024-11-27 11:14:34.943401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.505 ms 00:19:06.123 [2024-11-27 11:14:34.943409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.123 [2024-11-27 11:14:34.943579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.123 [2024-11-27 11:14:34.943592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:06.123 [2024-11-27 11:14:34.943608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:06.123 [2024-11-27 11:14:34.943618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.123 [2024-11-27 11:14:34.943678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.123 [2024-11-27 11:14:34.943689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:06.123 [2024-11-27 11:14:34.943700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:06.123 [2024-11-27 11:14:34.943708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.123 [2024-11-27 11:14:34.943735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:06.123 [2024-11-27 11:14:34.945954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.123 [2024-11-27 11:14:34.945990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:06.123 [2024-11-27 11:14:34.946004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.230 ms 00:19:06.123 [2024-11-27 11:14:34.946015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.123 [2024-11-27 11:14:34.946054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.123 [2024-11-27 11:14:34.946066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:06.123 [2024-11-27 11:14:34.946080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:06.123 [2024-11-27 11:14:34.946094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.123 [2024-11-27 11:14:34.946121] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:06.123 [2024-11-27 11:14:34.946275] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:06.124 [2024-11-27 11:14:34.946295] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:06.124 [2024-11-27 11:14:34.946311] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:06.124 [2024-11-27 11:14:34.946323] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:06.124 [2024-11-27 11:14:34.946336] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:06.124 [2024-11-27 11:14:34.946345] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:06.124 [2024-11-27 11:14:34.946363] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:06.124 [2024-11-27 11:14:34.946371] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:06.124 [2024-11-27 11:14:34.946382] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:06.124 [2024-11-27 11:14:34.946396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.124 [2024-11-27 11:14:34.946407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:06.124 [2024-11-27 11:14:34.946416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:19:06.124 [2024-11-27 11:14:34.946425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.124 [2024-11-27 11:14:34.946509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.124 [2024-11-27 11:14:34.946526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:06.124 [2024-11-27 11:14:34.946534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:06.124 [2024-11-27 11:14:34.946543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.124 [2024-11-27 11:14:34.946638] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:06.124 [2024-11-27 11:14:34.946656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:06.124 [2024-11-27 11:14:34.946665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:06.124 [2024-11-27 11:14:34.946675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.124 [2024-11-27 11:14:34.946682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:06.124 [2024-11-27 11:14:34.946691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:06.124 [2024-11-27 11:14:34.946698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:06.124 [2024-11-27 11:14:34.946708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:06.124 [2024-11-27 11:14:34.946715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:06.124 [2024-11-27 11:14:34.946724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:06.124 [2024-11-27 11:14:34.946731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:06.124 [2024-11-27 11:14:34.946739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:06.124 [2024-11-27 11:14:34.946745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:06.124 [2024-11-27 11:14:34.946757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:06.124 [2024-11-27 11:14:34.946764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:06.124 [2024-11-27 11:14:34.946773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.124 [2024-11-27 11:14:34.946779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:06.124 [2024-11-27 11:14:34.946792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:06.124 [2024-11-27 11:14:34.946799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.124 [2024-11-27 11:14:34.946810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:06.124 [2024-11-27 11:14:34.946816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:06.124 [2024-11-27 11:14:34.946825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.124 [2024-11-27 11:14:34.946832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:06.124 
[2024-11-27 11:14:34.946841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:06.124 [2024-11-27 11:14:34.946847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.124 [2024-11-27 11:14:34.946855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:06.124 [2024-11-27 11:14:34.946862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:06.124 [2024-11-27 11:14:34.946871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.124 [2024-11-27 11:14:34.946877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:06.124 [2024-11-27 11:14:34.946904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:06.124 [2024-11-27 11:14:34.946911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.124 [2024-11-27 11:14:34.946920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:06.124 [2024-11-27 11:14:34.946927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:06.124 [2024-11-27 11:14:34.946937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:06.124 [2024-11-27 11:14:34.946944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:06.124 [2024-11-27 11:14:34.946953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:06.124 [2024-11-27 11:14:34.946959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:06.124 [2024-11-27 11:14:34.946968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:06.124 [2024-11-27 11:14:34.946975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:06.124 [2024-11-27 11:14:34.946983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.124 [2024-11-27 11:14:34.946990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:06.124 [2024-11-27 11:14:34.946999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:06.124 [2024-11-27 11:14:34.947005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.124 [2024-11-27 11:14:34.947014] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:06.124 [2024-11-27 11:14:34.947025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:06.124 [2024-11-27 11:14:34.947037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:06.124 [2024-11-27 11:14:34.947045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.124 [2024-11-27 11:14:34.947054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:06.124 [2024-11-27 11:14:34.947061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:06.124 [2024-11-27 11:14:34.947071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:06.124 [2024-11-27 11:14:34.947079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:06.124 [2024-11-27 11:14:34.947088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:06.124 [2024-11-27 11:14:34.947095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:06.124 [2024-11-27 11:14:34.947108] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:06.124 [2024-11-27 
11:14:34.947119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:06.124 [2024-11-27 11:14:34.947130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:06.124 [2024-11-27 11:14:34.947137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:06.124 [2024-11-27 11:14:34.947148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:06.124 [2024-11-27 11:14:34.947155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:06.124 [2024-11-27 11:14:34.947165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:06.124 [2024-11-27 11:14:34.947172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:06.124 [2024-11-27 11:14:34.947183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:06.124 [2024-11-27 11:14:34.947191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:06.124 [2024-11-27 11:14:34.947200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:06.124 [2024-11-27 11:14:34.947207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:06.124 [2024-11-27 11:14:34.947216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:06.124 [2024-11-27 11:14:34.947223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:06.124 [2024-11-27 11:14:34.947232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:06.124 [2024-11-27 11:14:34.947240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:06.124 [2024-11-27 11:14:34.947249] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:06.124 [2024-11-27 11:14:34.947262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:06.124 [2024-11-27 11:14:34.947273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:06.124 [2024-11-27 11:14:34.947280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:06.124 [2024-11-27 11:14:34.947290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:06.124 [2024-11-27 11:14:34.947297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:06.124 [2024-11-27 11:14:34.947306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.124 [2024-11-27 11:14:34.947313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:06.124 [2024-11-27 11:14:34.947326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:19:06.124 [2024-11-27 11:14:34.947333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.124 [2024-11-27 11:14:34.947375] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:06.124 [2024-11-27 11:14:34.947384] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:10.327 [2024-11-27 11:14:39.094555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.327 [2024-11-27 11:14:39.094627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:10.327 [2024-11-27 11:14:39.094654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4147.157 ms 00:19:10.327 [2024-11-27 11:14:39.094664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.327 [2024-11-27 11:14:39.113739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.327 [2024-11-27 11:14:39.113791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:10.327 [2024-11-27 11:14:39.113809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.934 ms 00:19:10.327 [2024-11-27 11:14:39.113818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.327 [2024-11-27 11:14:39.113966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.327 [2024-11-27 11:14:39.113978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:10.327 [2024-11-27 11:14:39.113996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:10.327 [2024-11-27 11:14:39.114006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.327 [2024-11-27 11:14:39.130048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.327 [2024-11-27 11:14:39.130089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:10.327 [2024-11-27 11:14:39.130105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.993 ms 00:19:10.327 [2024-11-27 11:14:39.130114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.327 [2024-11-27 11:14:39.130154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.327 [2024-11-27 11:14:39.130171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:10.327 [2024-11-27 11:14:39.130183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:10.327 [2024-11-27 11:14:39.130192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.327 [2024-11-27 11:14:39.130911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.327 [2024-11-27 11:14:39.130947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:10.327 [2024-11-27 11:14:39.130962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms 00:19:10.327 [2024-11-27 11:14:39.130973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.327 
[2024-11-27 11:14:39.131107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.327 [2024-11-27 11:14:39.131116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:10.327 [2024-11-27 11:14:39.131132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:10.327 [2024-11-27 11:14:39.131141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.327 [2024-11-27 11:14:39.152147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.327 [2024-11-27 11:14:39.152209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:10.327 [2024-11-27 11:14:39.152232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.976 ms 00:19:10.327 [2024-11-27 11:14:39.152246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.327 [2024-11-27 11:14:39.164102] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:10.327 [2024-11-27 11:14:39.169073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.327 [2024-11-27 11:14:39.169117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:10.327 [2024-11-27 11:14:39.169130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.691 ms 00:19:10.327 [2024-11-27 11:14:39.169141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.586 [2024-11-27 11:14:39.252519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.586 [2024-11-27 11:14:39.252576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:10.586 [2024-11-27 11:14:39.252590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.340 ms 00:19:10.586 [2024-11-27 11:14:39.252606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.586 [2024-11-27 11:14:39.252826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.586 [2024-11-27 11:14:39.252842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:10.586 [2024-11-27 11:14:39.252852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:19:10.586 [2024-11-27 11:14:39.252863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.586 [2024-11-27 11:14:39.258882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.586 [2024-11-27 11:14:39.258951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:10.586 [2024-11-27 11:14:39.258963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.997 ms 00:19:10.586 [2024-11-27 11:14:39.258976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.586 [2024-11-27 11:14:39.264181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.586 [2024-11-27 11:14:39.264230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:10.586 [2024-11-27 11:14:39.264242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.151 ms 00:19:10.586 [2024-11-27 11:14:39.264253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.586 [2024-11-27 11:14:39.264579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.586 [2024-11-27 11:14:39.264593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:10.586 
[2024-11-27 11:14:39.264603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:19:10.586 [2024-11-27 11:14:39.264617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.586 [2024-11-27 11:14:39.309855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.586 [2024-11-27 11:14:39.309919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:10.586 [2024-11-27 11:14:39.309932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.215 ms 00:19:10.586 [2024-11-27 11:14:39.309945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.586 [2024-11-27 11:14:39.317907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.586 [2024-11-27 11:14:39.317956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:10.586 [2024-11-27 11:14:39.317968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.918 ms 00:19:10.586 [2024-11-27 11:14:39.317980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.586 [2024-11-27 11:14:39.323677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.586 [2024-11-27 11:14:39.323725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:10.586 [2024-11-27 11:14:39.323736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.670 ms 00:19:10.586 [2024-11-27 11:14:39.323746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.586 [2024-11-27 11:14:39.330142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.586 [2024-11-27 11:14:39.330190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:10.586 [2024-11-27 11:14:39.330201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.369 ms 00:19:10.586 [2024-11-27 11:14:39.330216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.586 [2024-11-27 11:14:39.330249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.586 [2024-11-27 11:14:39.330263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:10.586 [2024-11-27 11:14:39.330272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:10.586 [2024-11-27 11:14:39.330289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.586 [2024-11-27 11:14:39.330385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.586 [2024-11-27 11:14:39.330399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:10.586 [2024-11-27 11:14:39.330408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:10.587 [2024-11-27 11:14:39.330428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.587 [2024-11-27 11:14:39.331796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4399.247 ms, result 0 00:19:10.587 { 00:19:10.587 "name": "ftl0", 00:19:10.587 "uuid": "cf8b7018-7e12-468c-960c-3a6629bb4ab1" 00:19:10.587 } 00:19:10.587 11:14:39 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:10.587 11:14:39 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:10.847 11:14:39 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:10.847 11:14:39 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:11.110 [2024-11-27 11:14:39.770959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.110 [2024-11-27 11:14:39.771005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:11.110 [2024-11-27 11:14:39.771021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:11.110 [2024-11-27 11:14:39.771030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.110 [2024-11-27 11:14:39.771073] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:11.110 [2024-11-27 11:14:39.772065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.110 [2024-11-27 11:14:39.772109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:11.110 [2024-11-27 11:14:39.772121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:19:11.110 [2024-11-27 11:14:39.772133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.110 [2024-11-27 11:14:39.772399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.110 [2024-11-27 11:14:39.772422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:11.110 [2024-11-27 11:14:39.772432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:19:11.110 [2024-11-27 11:14:39.772444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.110 [2024-11-27 11:14:39.775713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.110 [2024-11-27 11:14:39.775742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:11.110 [2024-11-27 11:14:39.775757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.252 ms 00:19:11.110 [2024-11-27 11:14:39.775767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.111 [2024-11-27 11:14:39.782304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.111 [2024-11-27 11:14:39.782344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:11.111 [2024-11-27 11:14:39.782355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.518 ms 00:19:11.111 [2024-11-27 11:14:39.782366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.111 [2024-11-27 11:14:39.785294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.111 [2024-11-27 11:14:39.785354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:11.111 [2024-11-27 11:14:39.785365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.837 ms 00:19:11.111 [2024-11-27 11:14:39.785376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.111 [2024-11-27 11:14:39.793046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.111 [2024-11-27 11:14:39.793098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:11.111 [2024-11-27 11:14:39.793112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.622 ms 00:19:11.111 [2024-11-27 11:14:39.793125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.111 [2024-11-27 11:14:39.793262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.111 [2024-11-27 11:14:39.793277] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:11.111 [2024-11-27 11:14:39.793288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:11.111 [2024-11-27 11:14:39.793301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.111 [2024-11-27 11:14:39.796432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.111 [2024-11-27 11:14:39.796479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:11.111 [2024-11-27 11:14:39.796489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.107 ms 00:19:11.111 [2024-11-27 11:14:39.796498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.111 [2024-11-27 11:14:39.799396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.111 [2024-11-27 11:14:39.799448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:11.111 [2024-11-27 11:14:39.799459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.851 ms 00:19:11.111 [2024-11-27 11:14:39.799470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.111 [2024-11-27 11:14:39.801957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.111 [2024-11-27 11:14:39.802005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:11.111 [2024-11-27 11:14:39.802015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.440 ms 00:19:11.111 [2024-11-27 11:14:39.802026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.111 [2024-11-27 11:14:39.804270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.111 [2024-11-27 11:14:39.804319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:11.111 [2024-11-27 11:14:39.804330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.172 ms 00:19:11.111 [2024-11-27 11:14:39.804341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.111 [2024-11-27 11:14:39.804386] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:11.111 [2024-11-27 11:14:39.804408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804513] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 
[2024-11-27 11:14:39.804746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:11.111 [2024-11-27 11:14:39.804821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.804994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:19:11.112 [2024-11-27 11:14:39.805022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:11.112 [2024-11-27 11:14:39.805447] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:11.112 [2024-11-27 11:14:39.805457] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cf8b7018-7e12-468c-960c-3a6629bb4ab1 00:19:11.112 [2024-11-27 11:14:39.805467] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:11.112 [2024-11-27 11:14:39.805475] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:11.112 [2024-11-27 11:14:39.805486] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:11.112 [2024-11-27 11:14:39.805495] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:11.112 [2024-11-27 11:14:39.805506] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:11.112 [2024-11-27 11:14:39.805513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:11.112 [2024-11-27 11:14:39.805523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:11.112 [2024-11-27 11:14:39.805529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:11.112 [2024-11-27 11:14:39.805538] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:19:11.112 [2024-11-27 11:14:39.805545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.112 [2024-11-27 11:14:39.805560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:11.113 [2024-11-27 11:14:39.805569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.161 ms 00:19:11.113 [2024-11-27 11:14:39.805578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.807969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.113 [2024-11-27 11:14:39.808005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:11.113 [2024-11-27 11:14:39.808016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.366 ms 00:19:11.113 [2024-11-27 11:14:39.808028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.808134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.113 [2024-11-27 11:14:39.808148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:11.113 [2024-11-27 11:14:39.808159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:11.113 [2024-11-27 11:14:39.808171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.818855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.113 [2024-11-27 11:14:39.818951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:11.113 [2024-11-27 11:14:39.818964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.113 [2024-11-27 11:14:39.818975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.819049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.113 [2024-11-27 11:14:39.819061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:11.113 [2024-11-27 11:14:39.819071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.113 [2024-11-27 11:14:39.819082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.819175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.113 [2024-11-27 11:14:39.819193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:11.113 [2024-11-27 11:14:39.819202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.113 [2024-11-27 11:14:39.819214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.819237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.113 [2024-11-27 11:14:39.819252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:11.113 [2024-11-27 11:14:39.819261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.113 [2024-11-27 11:14:39.819274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.839104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.113 [2024-11-27 11:14:39.839159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:11.113 [2024-11-27 11:14:39.839172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.113 
[2024-11-27 11:14:39.839184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.855329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.113 [2024-11-27 11:14:39.855387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:11.113 [2024-11-27 11:14:39.855399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.113 [2024-11-27 11:14:39.855415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.855522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.113 [2024-11-27 11:14:39.855540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:11.113 [2024-11-27 11:14:39.855549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.113 [2024-11-27 11:14:39.855562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.855614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.113 [2024-11-27 11:14:39.855628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:11.113 [2024-11-27 11:14:39.855639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.113 [2024-11-27 11:14:39.855650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.855738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.113 [2024-11-27 11:14:39.855754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:11.113 [2024-11-27 11:14:39.855762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.113 [2024-11-27 11:14:39.855774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.855815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.113 [2024-11-27 11:14:39.855829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:11.113 [2024-11-27 11:14:39.855838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.113 [2024-11-27 11:14:39.855854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.855951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.113 [2024-11-27 11:14:39.855970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:11.113 [2024-11-27 11:14:39.855980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.113 [2024-11-27 11:14:39.855993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.856057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.113 [2024-11-27 11:14:39.856072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:11.113 [2024-11-27 11:14:39.856085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.113 [2024-11-27 11:14:39.856097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.113 [2024-11-27 11:14:39.856274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.263 ms, result 0 00:19:11.113 true 00:19:11.113 11:14:39 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86322 00:19:11.113 
11:14:39 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86322 ']' 00:19:11.113 11:14:39 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86322 00:19:11.113 11:14:39 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:19:11.113 11:14:39 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:11.113 11:14:39 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86322 00:19:11.113 killing process with pid 86322 00:19:11.113 11:14:39 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:11.113 11:14:39 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:11.113 11:14:39 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86322' 00:19:11.113 11:14:39 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 86322 00:19:11.113 11:14:39 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 86322 00:19:16.505 11:14:44 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:20.711 262144+0 records in 00:19:20.711 262144+0 records out 00:19:20.711 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.21259 s, 255 MB/s 00:19:20.711 11:14:49 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:22.623 11:14:51 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:22.623 [2024-11-27 11:14:51.258039] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:22.623 [2024-11-27 11:14:51.258151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86552 ] 00:19:22.623 [2024-11-27 11:14:51.407047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.623 [2024-11-27 11:14:51.454203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.884 [2024-11-27 11:14:51.565670] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.884 [2024-11-27 11:14:51.565740] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.884 [2024-11-27 11:14:51.725397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.884 [2024-11-27 11:14:51.725438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:22.884 [2024-11-27 11:14:51.725455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:22.884 [2024-11-27 11:14:51.725464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.884 [2024-11-27 11:14:51.725511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.884 [2024-11-27 11:14:51.725521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.884 [2024-11-27 11:14:51.725535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:22.885 [2024-11-27 11:14:51.725547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.885 [2024-11-27 11:14:51.725567] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using 
nvc0n1p0 as write buffer cache 00:19:22.885 [2024-11-27 11:14:51.725811] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:22.885 [2024-11-27 11:14:51.725827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.885 [2024-11-27 11:14:51.725835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.885 [2024-11-27 11:14:51.725854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:22.885 [2024-11-27 11:14:51.725864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.885 [2024-11-27 11:14:51.727566] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:22.885 [2024-11-27 11:14:51.731334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.885 [2024-11-27 11:14:51.731378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:22.885 [2024-11-27 11:14:51.731390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.769 ms 00:19:22.885 [2024-11-27 11:14:51.731398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.885 [2024-11-27 11:14:51.731466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.885 [2024-11-27 11:14:51.731476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:22.885 [2024-11-27 11:14:51.731488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:22.885 [2024-11-27 11:14:51.731496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.885 [2024-11-27 11:14:51.740295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.885 [2024-11-27 11:14:51.740326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.885 [2024-11-27 11:14:51.740337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.755 ms 00:19:22.885 [2024-11-27 11:14:51.740351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.885 [2024-11-27 11:14:51.740448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.885 [2024-11-27 11:14:51.740457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.885 [2024-11-27 11:14:51.740466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:22.885 [2024-11-27 11:14:51.740474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.885 [2024-11-27 11:14:51.740540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.885 [2024-11-27 11:14:51.740550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:22.885 [2024-11-27 11:14:51.740559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:22.885 [2024-11-27 11:14:51.740566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.885 [2024-11-27 11:14:51.740595] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:22.885 [2024-11-27 11:14:51.742760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.885 [2024-11-27 11:14:51.742789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.885 [2024-11-27 11:14:51.742800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.172 ms 00:19:22.885 [2024-11-27 11:14:51.742808] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.885 [2024-11-27 11:14:51.742842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.885 [2024-11-27 11:14:51.742850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:22.885 [2024-11-27 11:14:51.742859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:22.885 [2024-11-27 11:14:51.742867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.885 [2024-11-27 11:14:51.742911] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:22.885 [2024-11-27 11:14:51.742944] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:22.885 [2024-11-27 11:14:51.742986] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:22.885 [2024-11-27 11:14:51.743007] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:22.885 [2024-11-27 11:14:51.743117] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:22.885 [2024-11-27 11:14:51.743129] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:22.885 [2024-11-27 11:14:51.743140] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:22.885 [2024-11-27 11:14:51.743152] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:22.885 [2024-11-27 11:14:51.743164] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:22.885 [2024-11-27 11:14:51.743173] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:22.885 [2024-11-27 11:14:51.743181] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:22.885 [2024-11-27 11:14:51.743189] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:22.885 [2024-11-27 11:14:51.743198] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:22.885 [2024-11-27 11:14:51.743205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.885 [2024-11-27 11:14:51.743217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:22.885 [2024-11-27 11:14:51.743225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:19:22.885 [2024-11-27 11:14:51.743237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.885 [2024-11-27 11:14:51.743329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.885 [2024-11-27 11:14:51.743341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:22.885 [2024-11-27 11:14:51.743349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:22.885 [2024-11-27 11:14:51.743356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.885 [2024-11-27 11:14:51.743459] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:22.885 [2024-11-27 11:14:51.743471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:22.885 [2024-11-27 11:14:51.743480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:19:22.885 [2024-11-27 11:14:51.743499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.885 [2024-11-27 11:14:51.743508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:22.885 [2024-11-27 11:14:51.743517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:22.885 [2024-11-27 11:14:51.743525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:22.885 [2024-11-27 11:14:51.743534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:22.885 [2024-11-27 11:14:51.743542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:22.885 [2024-11-27 11:14:51.743550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.885 [2024-11-27 11:14:51.743559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:22.885 [2024-11-27 11:14:51.743568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:22.885 [2024-11-27 11:14:51.743579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.885 [2024-11-27 11:14:51.743588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:22.885 [2024-11-27 11:14:51.743596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:22.885 [2024-11-27 11:14:51.743603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.885 [2024-11-27 11:14:51.743611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:22.885 [2024-11-27 11:14:51.743619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:22.885 [2024-11-27 11:14:51.743626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.885 [2024-11-27 11:14:51.743635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:22.885 [2024-11-27 11:14:51.743643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:22.885 [2024-11-27 11:14:51.743651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.885 [2024-11-27 11:14:51.743658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:22.885 [2024-11-27 11:14:51.743667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:22.886 [2024-11-27 11:14:51.743674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.886 [2024-11-27 11:14:51.743682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:22.886 [2024-11-27 11:14:51.743690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:22.886 [2024-11-27 11:14:51.743697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.886 [2024-11-27 11:14:51.743709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:22.886 [2024-11-27 11:14:51.743718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:22.886 [2024-11-27 11:14:51.743727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.886 [2024-11-27 11:14:51.743735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:22.886 [2024-11-27 11:14:51.743742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:22.886 [2024-11-27 11:14:51.743750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.886 [2024-11-27 11:14:51.743757] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:19:22.886 [2024-11-27 11:14:51.743765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:22.886 [2024-11-27 11:14:51.743773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.886 [2024-11-27 11:14:51.743781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:22.886 [2024-11-27 11:14:51.743788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:22.886 [2024-11-27 11:14:51.743794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.886 [2024-11-27 11:14:51.743800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:22.886 [2024-11-27 11:14:51.743807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:22.886 [2024-11-27 11:14:51.743814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.886 [2024-11-27 11:14:51.743824] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:22.886 [2024-11-27 11:14:51.743834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:22.886 [2024-11-27 11:14:51.743842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.886 [2024-11-27 11:14:51.743851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.886 [2024-11-27 11:14:51.743859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:22.886 [2024-11-27 11:14:51.743866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:22.886 [2024-11-27 11:14:51.743873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:22.886 [2024-11-27 11:14:51.743879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:22.886 [2024-11-27 11:14:51.743900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:22.886 [2024-11-27 11:14:51.743908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:22.886 [2024-11-27 11:14:51.743916] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:22.886 [2024-11-27 11:14:51.743925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.886 [2024-11-27 11:14:51.743934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:22.886 [2024-11-27 11:14:51.743942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:22.886 [2024-11-27 11:14:51.743950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:22.886 [2024-11-27 11:14:51.743957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:22.886 [2024-11-27 11:14:51.743964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:22.886 [2024-11-27 11:14:51.743974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:22.886 [2024-11-27 11:14:51.743981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:22.886 [2024-11-27 11:14:51.743990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:22.886 [2024-11-27 11:14:51.743997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:22.886 [2024-11-27 11:14:51.744005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:22.886 [2024-11-27 11:14:51.744012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:22.886 [2024-11-27 11:14:51.744019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:22.886 [2024-11-27 11:14:51.744030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:22.886 [2024-11-27 11:14:51.744037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:22.886 [2024-11-27 11:14:51.744044] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:22.886 [2024-11-27 11:14:51.744053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.886 [2024-11-27 11:14:51.744067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:22.886 [2024-11-27 11:14:51.744076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:22.886 [2024-11-27 11:14:51.744084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:22.886 [2024-11-27 11:14:51.744092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:22.886 [2024-11-27 11:14:51.744100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.886 [2024-11-27 11:14:51.744111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:22.886 [2024-11-27 11:14:51.744119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:19:22.886 [2024-11-27 11:14:51.744127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.767997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.768045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:23.149 [2024-11-27 11:14:51.768066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.821 ms 00:19:23.149 [2024-11-27 11:14:51.768081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.768188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.768200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:23.149 [2024-11-27 11:14:51.768210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.070 ms 00:19:23.149 [2024-11-27 11:14:51.768219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.783307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.783348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:23.149 [2024-11-27 11:14:51.783359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.021 ms 00:19:23.149 [2024-11-27 11:14:51.783368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.783405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.783421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:23.149 [2024-11-27 11:14:51.783431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:23.149 [2024-11-27 11:14:51.783439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.784140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.784181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:23.149 [2024-11-27 11:14:51.784193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:19:23.149 [2024-11-27 11:14:51.784204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.784381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.784392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:23.149 [2024-11-27 11:14:51.784402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:19:23.149 [2024-11-27 11:14:51.784411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.793271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.793309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:23.149 [2024-11-27 11:14:51.793328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.830 ms 00:19:23.149 [2024-11-27 11:14:51.793337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.797833] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:23.149 [2024-11-27 11:14:51.797884] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:23.149 [2024-11-27 11:14:51.797914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.797924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:23.149 [2024-11-27 11:14:51.797933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.457 ms 00:19:23.149 [2024-11-27 11:14:51.797941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.814155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.814204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:23.149 [2024-11-27 11:14:51.814217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.158 ms 00:19:23.149 [2024-11-27 11:14:51.814229] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.817128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.817168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:23.149 [2024-11-27 11:14:51.817177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.845 ms 00:19:23.149 [2024-11-27 11:14:51.817186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.819626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.819664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:23.149 [2024-11-27 11:14:51.819674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.394 ms 00:19:23.149 [2024-11-27 11:14:51.819682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.820070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.820084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:23.149 [2024-11-27 11:14:51.820095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:19:23.149 [2024-11-27 11:14:51.820109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.849681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.849742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:23.149 [2024-11-27 11:14:51.849766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.553 ms 00:19:23.149 [2024-11-27 11:14:51.849776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.858451] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:23.149 [2024-11-27 11:14:51.862127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.862165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:23.149 [2024-11-27 11:14:51.862177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.292 ms 00:19:23.149 [2024-11-27 11:14:51.862199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.862283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.862295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:23.149 [2024-11-27 11:14:51.862311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:23.149 [2024-11-27 11:14:51.862321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.862399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.862410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:23.149 [2024-11-27 11:14:51.862420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:23.149 [2024-11-27 11:14:51.862428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.149 [2024-11-27 11:14:51.862455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.149 [2024-11-27 11:14:51.862466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:19:23.150 [2024-11-27 11:14:51.862476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:23.150 [2024-11-27 11:14:51.862484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.150 [2024-11-27 11:14:51.862527] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:23.150 [2024-11-27 11:14:51.862538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.150 [2024-11-27 11:14:51.862549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:23.150 [2024-11-27 11:14:51.862558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:23.150 [2024-11-27 11:14:51.862566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.150 [2024-11-27 11:14:51.868699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.150 [2024-11-27 11:14:51.868752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:23.150 [2024-11-27 11:14:51.868765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.113 ms 00:19:23.150 [2024-11-27 11:14:51.868774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.150 [2024-11-27 11:14:51.868864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.150 [2024-11-27 11:14:51.868875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:23.150 [2024-11-27 11:14:51.868885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:23.150 [2024-11-27 11:14:51.868929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.150 [2024-11-27 11:14:51.870244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 144.297 ms, result 0 00:19:24.088  [2024-11-27T11:14:53.909Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-27T11:14:55.307Z] Copying: 46/1024 [MB] (19 MBps) [2024-11-27T11:14:56.242Z] Copying: 72/1024 [MB] (25 MBps) [2024-11-27T11:14:57.181Z] Copying: 93/1024 [MB] (20 MBps) [2024-11-27T11:14:58.123Z] Copying: 113/1024 [MB] (20 MBps) [2024-11-27T11:14:59.068Z] Copying: 133/1024 [MB] (20 MBps) [2024-11-27T11:15:00.007Z] Copying: 153/1024 [MB] (19 MBps) [2024-11-27T11:15:00.947Z] Copying: 176/1024 [MB] (22 MBps) [2024-11-27T11:15:01.891Z] Copying: 199/1024 [MB] (23 MBps) [2024-11-27T11:15:03.280Z] Copying: 218/1024 [MB] (18 MBps) [2024-11-27T11:15:04.223Z] Copying: 229/1024 [MB] (11 MBps) [2024-11-27T11:15:05.168Z] Copying: 239/1024 [MB] (10 MBps) [2024-11-27T11:15:06.111Z] Copying: 250/1024 [MB] (11 MBps) [2024-11-27T11:15:07.054Z] Copying: 263/1024 [MB] (13 MBps) [2024-11-27T11:15:07.998Z] Copying: 276/1024 [MB] (12 MBps) [2024-11-27T11:15:08.943Z] Copying: 288/1024 [MB] (11 MBps) [2024-11-27T11:15:09.888Z] Copying: 298/1024 [MB] (10 MBps) [2024-11-27T11:15:11.276Z] Copying: 308/1024 [MB] (10 MBps) [2024-11-27T11:15:12.218Z] Copying: 324/1024 [MB] (15 MBps) [2024-11-27T11:15:13.163Z] Copying: 336/1024 [MB] (11 MBps) [2024-11-27T11:15:14.143Z] Copying: 351/1024 [MB] (15 MBps) [2024-11-27T11:15:15.085Z] Copying: 363/1024 [MB] (11 MBps) [2024-11-27T11:15:16.081Z] Copying: 374/1024 [MB] (11 MBps) [2024-11-27T11:15:17.023Z] Copying: 385/1024 [MB] (10 MBps) [2024-11-27T11:15:17.967Z] Copying: 395/1024 [MB] (10 MBps) [2024-11-27T11:15:18.908Z] Copying: 406/1024 [MB] (10 MBps) [2024-11-27T11:15:20.296Z] Copying: 417/1024 [MB] (11 MBps) 
[2024-11-27T11:15:21.242Z] Copying: 428/1024 [MB] (11 MBps) [2024-11-27T11:15:22.184Z] Copying: 438/1024 [MB] (10 MBps) [2024-11-27T11:15:23.128Z] Copying: 449/1024 [MB] (10 MBps) [2024-11-27T11:15:24.071Z] Copying: 460/1024 [MB] (10 MBps) [2024-11-27T11:15:25.017Z] Copying: 471/1024 [MB] (10 MBps) [2024-11-27T11:15:25.961Z] Copying: 492512/1048576 [kB] (10152 kBps) [2024-11-27T11:15:26.905Z] Copying: 491/1024 [MB] (10 MBps) [2024-11-27T11:15:28.291Z] Copying: 503/1024 [MB] (11 MBps) [2024-11-27T11:15:29.237Z] Copying: 514/1024 [MB] (11 MBps) [2024-11-27T11:15:30.179Z] Copying: 525/1024 [MB] (11 MBps) [2024-11-27T11:15:31.124Z] Copying: 536/1024 [MB] (11 MBps) [2024-11-27T11:15:32.068Z] Copying: 547/1024 [MB] (11 MBps) [2024-11-27T11:15:33.013Z] Copying: 558/1024 [MB] (11 MBps) [2024-11-27T11:15:33.958Z] Copying: 569/1024 [MB] (11 MBps) [2024-11-27T11:15:34.905Z] Copying: 581/1024 [MB] (11 MBps) [2024-11-27T11:15:36.294Z] Copying: 591/1024 [MB] (10 MBps) [2024-11-27T11:15:37.238Z] Copying: 602/1024 [MB] (11 MBps) [2024-11-27T11:15:38.183Z] Copying: 613/1024 [MB] (10 MBps) [2024-11-27T11:15:39.127Z] Copying: 624/1024 [MB] (11 MBps) [2024-11-27T11:15:40.071Z] Copying: 636/1024 [MB] (11 MBps) [2024-11-27T11:15:41.017Z] Copying: 647/1024 [MB] (11 MBps) [2024-11-27T11:15:41.961Z] Copying: 658/1024 [MB] (11 MBps) [2024-11-27T11:15:42.906Z] Copying: 668/1024 [MB] (10 MBps) [2024-11-27T11:15:44.294Z] Copying: 678/1024 [MB] (10 MBps) [2024-11-27T11:15:44.905Z] Copying: 690/1024 [MB] (11 MBps) [2024-11-27T11:15:46.308Z] Copying: 700/1024 [MB] (10 MBps) [2024-11-27T11:15:46.881Z] Copying: 711/1024 [MB] (10 MBps) [2024-11-27T11:15:47.911Z] Copying: 722/1024 [MB] (11 MBps) [2024-11-27T11:15:49.297Z] Copying: 733/1024 [MB] (10 MBps) [2024-11-27T11:15:50.240Z] Copying: 743/1024 [MB] (10 MBps) [2024-11-27T11:15:51.185Z] Copying: 756/1024 [MB] (12 MBps) [2024-11-27T11:15:52.131Z] Copying: 768/1024 [MB] (11 MBps) [2024-11-27T11:15:53.076Z] Copying: 778/1024 [MB] (10 MBps) [2024-11-27T11:15:54.022Z] Copying: 791/1024 [MB] (12 MBps) [2024-11-27T11:15:54.965Z] Copying: 801/1024 [MB] (10 MBps) [2024-11-27T11:15:55.912Z] Copying: 812/1024 [MB] (10 MBps) [2024-11-27T11:15:57.301Z] Copying: 822/1024 [MB] (10 MBps) [2024-11-27T11:15:58.246Z] Copying: 833/1024 [MB] (10 MBps) [2024-11-27T11:15:59.191Z] Copying: 843/1024 [MB] (10 MBps) [2024-11-27T11:16:00.135Z] Copying: 853/1024 [MB] (10 MBps) [2024-11-27T11:16:01.078Z] Copying: 863/1024 [MB] (10 MBps) [2024-11-27T11:16:02.024Z] Copying: 874/1024 [MB] (10 MBps) [2024-11-27T11:16:02.968Z] Copying: 884/1024 [MB] (10 MBps) [2024-11-27T11:16:03.910Z] Copying: 895/1024 [MB] (10 MBps) [2024-11-27T11:16:05.290Z] Copying: 906/1024 [MB] (10 MBps) [2024-11-27T11:16:06.231Z] Copying: 916/1024 [MB] (10 MBps) [2024-11-27T11:16:07.171Z] Copying: 927/1024 [MB] (10 MBps) [2024-11-27T11:16:08.104Z] Copying: 938/1024 [MB] (11 MBps) [2024-11-27T11:16:08.670Z] Copying: 993/1024 [MB] (54 MBps) [2024-11-27T11:16:08.670Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-27 11:16:08.438962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.787 [2024-11-27 11:16:08.438997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:39.787 [2024-11-27 11:16:08.439008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:39.787 [2024-11-27 11:16:08.439015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.787 [2024-11-27 11:16:08.439051] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: 
*NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:39.787 [2024-11-27 11:16:08.439441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.787 [2024-11-27 11:16:08.439454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:39.787 [2024-11-27 11:16:08.439462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:20:39.787 [2024-11-27 11:16:08.439473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.787 [2024-11-27 11:16:08.440730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.787 [2024-11-27 11:16:08.440752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:39.787 [2024-11-27 11:16:08.440760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:20:39.787 [2024-11-27 11:16:08.440766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.787 [2024-11-27 11:16:08.450954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.787 [2024-11-27 11:16:08.450986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:39.787 [2024-11-27 11:16:08.450994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.176 ms 00:20:39.787 [2024-11-27 11:16:08.451000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.787 [2024-11-27 11:16:08.455807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.787 [2024-11-27 11:16:08.455830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:39.787 [2024-11-27 11:16:08.455837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.786 ms 00:20:39.787 [2024-11-27 11:16:08.455843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.787 [2024-11-27 11:16:08.456832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.787 [2024-11-27 11:16:08.456859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:39.787 [2024-11-27 11:16:08.456866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:20:39.787 [2024-11-27 11:16:08.456871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.787 [2024-11-27 11:16:08.460168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.787 [2024-11-27 11:16:08.460200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:39.788 [2024-11-27 11:16:08.460207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.256 ms 00:20:39.788 [2024-11-27 11:16:08.460212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.788 [2024-11-27 11:16:08.460294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.788 [2024-11-27 11:16:08.460301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:39.788 [2024-11-27 11:16:08.460307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:39.788 [2024-11-27 11:16:08.460313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.788 [2024-11-27 11:16:08.462004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.788 [2024-11-27 11:16:08.462030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:39.788 [2024-11-27 11:16:08.462036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.681 ms 00:20:39.788 [2024-11-27 11:16:08.462041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.788 [2024-11-27 11:16:08.463428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.788 [2024-11-27 11:16:08.463473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:39.788 [2024-11-27 11:16:08.463482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:20:39.788 [2024-11-27 11:16:08.463487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.788 [2024-11-27 11:16:08.464294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.788 [2024-11-27 11:16:08.464322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:39.788 [2024-11-27 11:16:08.464328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:20:39.788 [2024-11-27 11:16:08.464334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.788 [2024-11-27 11:16:08.465235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.788 [2024-11-27 11:16:08.465260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:39.788 [2024-11-27 11:16:08.465267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:20:39.788 [2024-11-27 11:16:08.465272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.788 [2024-11-27 11:16:08.465294] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:39.788 [2024-11-27 11:16:08.465305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 
wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:39.788 [2024-11-27 11:16:08.465640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465668] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465808] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:39.789 [2024-11-27 11:16:08.465882] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:39.789 [2024-11-27 11:16:08.465897] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cf8b7018-7e12-468c-960c-3a6629bb4ab1 00:20:39.789 [2024-11-27 11:16:08.465904] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:39.789 [2024-11-27 11:16:08.465909] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:39.789 [2024-11-27 11:16:08.465914] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:39.789 [2024-11-27 11:16:08.465920] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:39.789 [2024-11-27 11:16:08.465925] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:39.789 [2024-11-27 11:16:08.465931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:39.789 [2024-11-27 11:16:08.465936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:39.789 [2024-11-27 11:16:08.465941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:39.789 [2024-11-27 11:16:08.465946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:39.789 [2024-11-27 11:16:08.465951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.789 [2024-11-27 11:16:08.465957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:39.789 [2024-11-27 11:16:08.465963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:20:39.789 [2024-11-27 11:16:08.465973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.789 [2024-11-27 11:16:08.467157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.789 [2024-11-27 11:16:08.467182] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:39.789 [2024-11-27 11:16:08.467193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.173 ms 00:20:39.789 [2024-11-27 11:16:08.467198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.789 [2024-11-27 11:16:08.467273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.789 [2024-11-27 11:16:08.467280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:39.789 [2024-11-27 11:16:08.467292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:39.789 [2024-11-27 11:16:08.467300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.789 [2024-11-27 11:16:08.470943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.789 [2024-11-27 11:16:08.470963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.789 [2024-11-27 11:16:08.470974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.789 [2024-11-27 11:16:08.470983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.789 [2024-11-27 11:16:08.471022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.789 [2024-11-27 11:16:08.471028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.789 [2024-11-27 11:16:08.471037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.789 [2024-11-27 11:16:08.471042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.789 [2024-11-27 11:16:08.471068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.789 [2024-11-27 11:16:08.471077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.789 [2024-11-27 11:16:08.471082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.789 [2024-11-27 11:16:08.471090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.789 [2024-11-27 11:16:08.471103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.789 [2024-11-27 11:16:08.471109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.789 [2024-11-27 11:16:08.471114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.789 [2024-11-27 11:16:08.471122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.789 [2024-11-27 11:16:08.478369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.789 [2024-11-27 11:16:08.478402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.789 [2024-11-27 11:16:08.478410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.789 [2024-11-27 11:16:08.478416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.789 [2024-11-27 11:16:08.484407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.789 [2024-11-27 11:16:08.484438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.789 [2024-11-27 11:16:08.484450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.790 [2024-11-27 11:16:08.484456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.790 [2024-11-27 11:16:08.484489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:20:39.790 [2024-11-27 11:16:08.484496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.790 [2024-11-27 11:16:08.484502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.790 [2024-11-27 11:16:08.484512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.790 [2024-11-27 11:16:08.484530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.790 [2024-11-27 11:16:08.484536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.790 [2024-11-27 11:16:08.484542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.790 [2024-11-27 11:16:08.484548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.790 [2024-11-27 11:16:08.484596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.790 [2024-11-27 11:16:08.484606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.790 [2024-11-27 11:16:08.484612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.790 [2024-11-27 11:16:08.484620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.790 [2024-11-27 11:16:08.484641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.790 [2024-11-27 11:16:08.484648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:39.790 [2024-11-27 11:16:08.484654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.790 [2024-11-27 11:16:08.484659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.790 [2024-11-27 11:16:08.484692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.790 [2024-11-27 11:16:08.484699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.790 [2024-11-27 11:16:08.484705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.790 [2024-11-27 11:16:08.484711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.790 [2024-11-27 11:16:08.484742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.790 [2024-11-27 11:16:08.484749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.790 [2024-11-27 11:16:08.484756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.790 [2024-11-27 11:16:08.484762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.790 [2024-11-27 11:16:08.484851] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.868 ms, result 0 00:20:40.049 00:20:40.049 00:20:40.049 11:16:08 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:40.049 [2024-11-27 11:16:08.905309] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:20:40.049 [2024-11-27 11:16:08.905436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87361 ] 00:20:40.311 [2024-11-27 11:16:09.057335] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.311 [2024-11-27 11:16:09.106816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.575 [2024-11-27 11:16:09.222395] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.575 [2024-11-27 11:16:09.222467] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.575 [2024-11-27 11:16:09.383366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.575 [2024-11-27 11:16:09.383419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:40.575 [2024-11-27 11:16:09.383437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:40.575 [2024-11-27 11:16:09.383446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.575 [2024-11-27 11:16:09.383499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.575 [2024-11-27 11:16:09.383510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.575 [2024-11-27 11:16:09.383519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:40.575 [2024-11-27 11:16:09.383528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.575 [2024-11-27 11:16:09.383549] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:40.575 [2024-11-27 11:16:09.383954] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:40.575 [2024-11-27 11:16:09.383992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.575 [2024-11-27 11:16:09.384001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.575 [2024-11-27 11:16:09.384014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:20:40.575 [2024-11-27 11:16:09.384025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.575 [2024-11-27 11:16:09.385709] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:40.575 [2024-11-27 11:16:09.389325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.575 [2024-11-27 11:16:09.389366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:40.575 [2024-11-27 11:16:09.389378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.618 ms 00:20:40.575 [2024-11-27 11:16:09.389387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.575 [2024-11-27 11:16:09.389462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.575 [2024-11-27 11:16:09.389472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:40.575 [2024-11-27 11:16:09.389484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:40.575 [2024-11-27 11:16:09.389497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.575 [2024-11-27 11:16:09.397367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:40.575 [2024-11-27 11:16:09.397401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.575 [2024-11-27 11:16:09.397411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.825 ms 00:20:40.575 [2024-11-27 11:16:09.397425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.575 [2024-11-27 11:16:09.397527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.575 [2024-11-27 11:16:09.397540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.575 [2024-11-27 11:16:09.397549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:40.575 [2024-11-27 11:16:09.397557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.575 [2024-11-27 11:16:09.397612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.575 [2024-11-27 11:16:09.397628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:40.575 [2024-11-27 11:16:09.397636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:40.575 [2024-11-27 11:16:09.397643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.575 [2024-11-27 11:16:09.397674] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:40.575 [2024-11-27 11:16:09.399680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.575 [2024-11-27 11:16:09.399712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.575 [2024-11-27 11:16:09.399729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.014 ms 00:20:40.575 [2024-11-27 11:16:09.399737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.575 [2024-11-27 11:16:09.399772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.575 [2024-11-27 11:16:09.399781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:40.575 [2024-11-27 11:16:09.399789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:40.575 [2024-11-27 11:16:09.399797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.575 [2024-11-27 11:16:09.399819] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:40.575 [2024-11-27 11:16:09.399848] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:40.575 [2024-11-27 11:16:09.399913] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:40.575 [2024-11-27 11:16:09.399930] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:40.575 [2024-11-27 11:16:09.400043] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:40.575 [2024-11-27 11:16:09.400054] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:40.575 [2024-11-27 11:16:09.400064] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:40.575 [2024-11-27 11:16:09.400075] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:40.575 [2024-11-27 11:16:09.400091] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:40.575 [2024-11-27 11:16:09.400099] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:40.575 [2024-11-27 11:16:09.400107] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:40.575 [2024-11-27 11:16:09.400114] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:40.575 [2024-11-27 11:16:09.400122] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:40.575 [2024-11-27 11:16:09.400130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.575 [2024-11-27 11:16:09.400138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:40.575 [2024-11-27 11:16:09.400147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:20:40.575 [2024-11-27 11:16:09.400154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.575 [2024-11-27 11:16:09.400240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.575 [2024-11-27 11:16:09.400251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:40.575 [2024-11-27 11:16:09.400259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:40.575 [2024-11-27 11:16:09.400266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.575 [2024-11-27 11:16:09.400364] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:40.575 [2024-11-27 11:16:09.400375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:40.575 [2024-11-27 11:16:09.400385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.575 [2024-11-27 11:16:09.400400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.575 [2024-11-27 11:16:09.400409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:40.575 [2024-11-27 11:16:09.400417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:40.575 [2024-11-27 11:16:09.400425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:40.575 [2024-11-27 11:16:09.400434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:40.575 [2024-11-27 11:16:09.400443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:40.575 [2024-11-27 11:16:09.400451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.575 [2024-11-27 11:16:09.400459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:40.575 [2024-11-27 11:16:09.400466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:40.575 [2024-11-27 11:16:09.400476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.575 [2024-11-27 11:16:09.400484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:40.575 [2024-11-27 11:16:09.400492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:40.575 [2024-11-27 11:16:09.400499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.575 [2024-11-27 11:16:09.400508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:40.575 [2024-11-27 11:16:09.400516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:40.575 [2024-11-27 11:16:09.400526] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.575 [2024-11-27 11:16:09.400534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:40.575 [2024-11-27 11:16:09.400542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:40.575 [2024-11-27 11:16:09.400550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.575 [2024-11-27 11:16:09.400558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:40.575 [2024-11-27 11:16:09.400566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:40.575 [2024-11-27 11:16:09.400573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.575 [2024-11-27 11:16:09.400582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:40.575 [2024-11-27 11:16:09.400589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:40.575 [2024-11-27 11:16:09.400596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.575 [2024-11-27 11:16:09.400611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:40.575 [2024-11-27 11:16:09.400620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:40.575 [2024-11-27 11:16:09.400627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.575 [2024-11-27 11:16:09.400635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:40.575 [2024-11-27 11:16:09.400643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:40.576 [2024-11-27 11:16:09.400651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.576 [2024-11-27 11:16:09.400659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:40.576 [2024-11-27 11:16:09.400667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:40.576 [2024-11-27 11:16:09.400675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.576 [2024-11-27 11:16:09.400683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:40.576 [2024-11-27 11:16:09.400691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:40.576 [2024-11-27 11:16:09.400699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.576 [2024-11-27 11:16:09.400706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:40.576 [2024-11-27 11:16:09.400713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:40.576 [2024-11-27 11:16:09.400720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.576 [2024-11-27 11:16:09.400727] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:40.576 [2024-11-27 11:16:09.400737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:40.576 [2024-11-27 11:16:09.400744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.576 [2024-11-27 11:16:09.400754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.576 [2024-11-27 11:16:09.400762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:40.576 [2024-11-27 11:16:09.400769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:40.576 [2024-11-27 11:16:09.400776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:40.576 
[2024-11-27 11:16:09.400784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:40.576 [2024-11-27 11:16:09.400791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:40.576 [2024-11-27 11:16:09.400798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:40.576 [2024-11-27 11:16:09.400807] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:40.576 [2024-11-27 11:16:09.400816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.576 [2024-11-27 11:16:09.400826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:40.576 [2024-11-27 11:16:09.400833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:40.576 [2024-11-27 11:16:09.400841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:40.576 [2024-11-27 11:16:09.400848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:40.576 [2024-11-27 11:16:09.400855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:40.576 [2024-11-27 11:16:09.400864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:40.576 [2024-11-27 11:16:09.400871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:40.576 [2024-11-27 11:16:09.400879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:40.576 [2024-11-27 11:16:09.400911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:40.576 [2024-11-27 11:16:09.400920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:40.576 [2024-11-27 11:16:09.400927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:40.576 [2024-11-27 11:16:09.400935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:40.576 [2024-11-27 11:16:09.400943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:40.576 [2024-11-27 11:16:09.400950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:40.576 [2024-11-27 11:16:09.400957] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:40.576 [2024-11-27 11:16:09.400967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.576 [2024-11-27 11:16:09.400977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:40.576 [2024-11-27 11:16:09.400985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:40.576 [2024-11-27 11:16:09.400994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:40.576 [2024-11-27 11:16:09.401002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:40.576 [2024-11-27 11:16:09.401010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.576 [2024-11-27 11:16:09.401021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:40.576 [2024-11-27 11:16:09.401029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:20:40.576 [2024-11-27 11:16:09.401037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.576 [2024-11-27 11:16:09.423802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.576 [2024-11-27 11:16:09.423863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.576 [2024-11-27 11:16:09.423908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.717 ms 00:20:40.576 [2024-11-27 11:16:09.423931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.576 [2024-11-27 11:16:09.424078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.576 [2024-11-27 11:16:09.424098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:40.576 [2024-11-27 11:16:09.424110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:40.576 [2024-11-27 11:16:09.424121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.576 [2024-11-27 11:16:09.436444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.576 [2024-11-27 11:16:09.436487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.576 [2024-11-27 11:16:09.436498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.233 ms 00:20:40.576 [2024-11-27 11:16:09.436506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.576 [2024-11-27 11:16:09.436541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.576 [2024-11-27 11:16:09.436556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.576 [2024-11-27 11:16:09.436568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:40.576 [2024-11-27 11:16:09.436576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.576 [2024-11-27 11:16:09.437204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.576 [2024-11-27 11:16:09.437234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.576 [2024-11-27 11:16:09.437251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:20:40.576 [2024-11-27 11:16:09.437260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.576 [2024-11-27 11:16:09.437407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.576 [2024-11-27 11:16:09.437427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.576 [2024-11-27 11:16:09.437437] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:40.576 [2024-11-27 11:16:09.437446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.576 [2024-11-27 11:16:09.444096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.576 [2024-11-27 11:16:09.444136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.576 [2024-11-27 11:16:09.444149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.625 ms 00:20:40.576 [2024-11-27 11:16:09.444157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.576 [2024-11-27 11:16:09.447849] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:40.576 [2024-11-27 11:16:09.447906] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:40.576 [2024-11-27 11:16:09.447929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.576 [2024-11-27 11:16:09.447938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:40.576 [2024-11-27 11:16:09.447947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:20:40.576 [2024-11-27 11:16:09.447954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.839 [2024-11-27 11:16:09.463428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.839 [2024-11-27 11:16:09.463475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:40.839 [2024-11-27 11:16:09.463491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.424 ms 00:20:40.839 [2024-11-27 11:16:09.463499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.839 [2024-11-27 11:16:09.466365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.839 [2024-11-27 11:16:09.466405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:40.839 [2024-11-27 11:16:09.466416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.813 ms 00:20:40.839 [2024-11-27 11:16:09.466423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.839 [2024-11-27 11:16:09.468802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.839 [2024-11-27 11:16:09.468839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:40.839 [2024-11-27 11:16:09.468850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.334 ms 00:20:40.839 [2024-11-27 11:16:09.468858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.839 [2024-11-27 11:16:09.469248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.839 [2024-11-27 11:16:09.469267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:40.839 [2024-11-27 11:16:09.469281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:20:40.839 [2024-11-27 11:16:09.469289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.839 [2024-11-27 11:16:09.491321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.839 [2024-11-27 11:16:09.491383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:40.839 [2024-11-27 11:16:09.491396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.014 ms 00:20:40.839 [2024-11-27 11:16:09.491410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.839 [2024-11-27 11:16:09.499429] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:40.839 [2024-11-27 11:16:09.502440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.839 [2024-11-27 11:16:09.502478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:40.839 [2024-11-27 11:16:09.502497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.979 ms 00:20:40.839 [2024-11-27 11:16:09.502506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.839 [2024-11-27 11:16:09.502586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.839 [2024-11-27 11:16:09.502602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:40.839 [2024-11-27 11:16:09.502616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:40.839 [2024-11-27 11:16:09.502624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.839 [2024-11-27 11:16:09.502693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.840 [2024-11-27 11:16:09.502704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:40.840 [2024-11-27 11:16:09.502712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:40.840 [2024-11-27 11:16:09.502724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.840 [2024-11-27 11:16:09.502744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.840 [2024-11-27 11:16:09.502753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:40.840 [2024-11-27 11:16:09.502761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:40.840 [2024-11-27 11:16:09.502769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.840 [2024-11-27 11:16:09.502808] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:40.840 [2024-11-27 11:16:09.502824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.840 [2024-11-27 11:16:09.502833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:40.840 [2024-11-27 11:16:09.502841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:40.840 [2024-11-27 11:16:09.502850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.840 [2024-11-27 11:16:09.508428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.840 [2024-11-27 11:16:09.508478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:40.840 [2024-11-27 11:16:09.508490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.556 ms 00:20:40.840 [2024-11-27 11:16:09.508508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.840 [2024-11-27 11:16:09.508591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.840 [2024-11-27 11:16:09.508602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:40.840 [2024-11-27 11:16:09.508612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:40.840 [2024-11-27 11:16:09.508621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.840 
[2024-11-27 11:16:09.509794] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 125.954 ms, result 0 00:20:42.228  [2024-11-27T11:16:12.051Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-27T11:16:12.993Z] Copying: 40/1024 [MB] (27 MBps) [2024-11-27T11:16:13.932Z] Copying: 61/1024 [MB] (20 MBps) [2024-11-27T11:16:14.875Z] Copying: 74/1024 [MB] (12 MBps) [2024-11-27T11:16:15.823Z] Copying: 87/1024 [MB] (13 MBps) [2024-11-27T11:16:16.766Z] Copying: 103/1024 [MB] (15 MBps) [2024-11-27T11:16:17.712Z] Copying: 119/1024 [MB] (16 MBps) [2024-11-27T11:16:19.094Z] Copying: 135/1024 [MB] (15 MBps) [2024-11-27T11:16:20.037Z] Copying: 147/1024 [MB] (11 MBps) [2024-11-27T11:16:20.982Z] Copying: 161/1024 [MB] (14 MBps) [2024-11-27T11:16:21.924Z] Copying: 181/1024 [MB] (20 MBps) [2024-11-27T11:16:22.867Z] Copying: 200/1024 [MB] (18 MBps) [2024-11-27T11:16:23.809Z] Copying: 219/1024 [MB] (19 MBps) [2024-11-27T11:16:24.752Z] Copying: 230/1024 [MB] (11 MBps) [2024-11-27T11:16:25.696Z] Copying: 248/1024 [MB] (17 MBps) [2024-11-27T11:16:27.081Z] Copying: 260/1024 [MB] (12 MBps) [2024-11-27T11:16:28.022Z] Copying: 275/1024 [MB] (14 MBps) [2024-11-27T11:16:28.964Z] Copying: 287/1024 [MB] (12 MBps) [2024-11-27T11:16:29.904Z] Copying: 299/1024 [MB] (12 MBps) [2024-11-27T11:16:30.843Z] Copying: 314/1024 [MB] (14 MBps) [2024-11-27T11:16:31.785Z] Copying: 327/1024 [MB] (13 MBps) [2024-11-27T11:16:32.727Z] Copying: 343/1024 [MB] (15 MBps) [2024-11-27T11:16:34.112Z] Copying: 355/1024 [MB] (12 MBps) [2024-11-27T11:16:35.058Z] Copying: 366/1024 [MB] (10 MBps) [2024-11-27T11:16:36.086Z] Copying: 381/1024 [MB] (14 MBps) [2024-11-27T11:16:37.030Z] Copying: 399/1024 [MB] (18 MBps) [2024-11-27T11:16:37.973Z] Copying: 418/1024 [MB] (19 MBps) [2024-11-27T11:16:38.913Z] Copying: 442/1024 [MB] (24 MBps) [2024-11-27T11:16:39.850Z] Copying: 462/1024 [MB] (19 MBps) [2024-11-27T11:16:40.794Z] Copying: 473/1024 [MB] (10 MBps) [2024-11-27T11:16:41.740Z] Copying: 490/1024 [MB] (16 MBps) [2024-11-27T11:16:43.125Z] Copying: 506/1024 [MB] (15 MBps) [2024-11-27T11:16:43.695Z] Copying: 519/1024 [MB] (12 MBps) [2024-11-27T11:16:45.083Z] Copying: 531/1024 [MB] (12 MBps) [2024-11-27T11:16:46.018Z] Copying: 549/1024 [MB] (18 MBps) [2024-11-27T11:16:46.959Z] Copying: 567/1024 [MB] (17 MBps) [2024-11-27T11:16:47.902Z] Copying: 589/1024 [MB] (22 MBps) [2024-11-27T11:16:48.842Z] Copying: 602/1024 [MB] (12 MBps) [2024-11-27T11:16:49.788Z] Copying: 613/1024 [MB] (10 MBps) [2024-11-27T11:16:50.729Z] Copying: 624/1024 [MB] (11 MBps) [2024-11-27T11:16:52.114Z] Copying: 636/1024 [MB] (11 MBps) [2024-11-27T11:16:53.059Z] Copying: 647/1024 [MB] (11 MBps) [2024-11-27T11:16:54.000Z] Copying: 658/1024 [MB] (10 MBps) [2024-11-27T11:16:54.947Z] Copying: 677/1024 [MB] (19 MBps) [2024-11-27T11:16:55.890Z] Copying: 688/1024 [MB] (10 MBps) [2024-11-27T11:16:56.834Z] Copying: 704/1024 [MB] (16 MBps) [2024-11-27T11:16:57.779Z] Copying: 725/1024 [MB] (20 MBps) [2024-11-27T11:16:58.723Z] Copying: 752/1024 [MB] (27 MBps) [2024-11-27T11:17:00.107Z] Copying: 766/1024 [MB] (14 MBps) [2024-11-27T11:17:00.802Z] Copying: 781/1024 [MB] (15 MBps) [2024-11-27T11:17:01.743Z] Copying: 792/1024 [MB] (10 MBps) [2024-11-27T11:17:03.131Z] Copying: 806/1024 [MB] (13 MBps) [2024-11-27T11:17:03.701Z] Copying: 820/1024 [MB] (14 MBps) [2024-11-27T11:17:05.084Z] Copying: 831/1024 [MB] (11 MBps) [2024-11-27T11:17:06.028Z] Copying: 857/1024 [MB] (25 MBps) [2024-11-27T11:17:06.971Z] Copying: 871/1024 [MB] (14 MBps) 
[2024-11-27T11:17:07.913Z] Copying: 887/1024 [MB] (15 MBps) [2024-11-27T11:17:08.852Z] Copying: 897/1024 [MB] (10 MBps) [2024-11-27T11:17:09.799Z] Copying: 915/1024 [MB] (18 MBps) [2024-11-27T11:17:10.745Z] Copying: 932/1024 [MB] (16 MBps) [2024-11-27T11:17:11.691Z] Copying: 949/1024 [MB] (17 MBps) [2024-11-27T11:17:13.076Z] Copying: 965/1024 [MB] (15 MBps) [2024-11-27T11:17:14.018Z] Copying: 986/1024 [MB] (21 MBps) [2024-11-27T11:17:14.590Z] Copying: 1014/1024 [MB] (28 MBps) [2024-11-27T11:17:15.165Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-27 11:17:14.959939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.282 [2024-11-27 11:17:14.960073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:46.282 [2024-11-27 11:17:14.960122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:46.282 [2024-11-27 11:17:14.960145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.282 [2024-11-27 11:17:14.960212] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:46.282 [2024-11-27 11:17:14.961337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.282 [2024-11-27 11:17:14.961390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:46.282 [2024-11-27 11:17:14.961418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:21:46.282 [2024-11-27 11:17:14.961433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.282 [2024-11-27 11:17:14.961724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.282 [2024-11-27 11:17:14.961737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:46.282 [2024-11-27 11:17:14.961749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:21:46.282 [2024-11-27 11:17:14.961760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.282 [2024-11-27 11:17:14.966592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.282 [2024-11-27 11:17:14.966638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:46.282 [2024-11-27 11:17:14.966650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.814 ms 00:21:46.282 [2024-11-27 11:17:14.966667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.282 [2024-11-27 11:17:14.974800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.282 [2024-11-27 11:17:14.975117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:46.282 [2024-11-27 11:17:14.975141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.110 ms 00:21:46.282 [2024-11-27 11:17:14.975150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.282 [2024-11-27 11:17:14.978283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.282 [2024-11-27 11:17:14.978469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:46.282 [2024-11-27 11:17:14.978488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.046 ms 00:21:46.282 [2024-11-27 11:17:14.978496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.282 [2024-11-27 11:17:14.983562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.282 [2024-11-27 11:17:14.983620] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:46.282 [2024-11-27 11:17:14.983632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.945 ms 00:21:46.282 [2024-11-27 11:17:14.983640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.282 [2024-11-27 11:17:14.983769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.282 [2024-11-27 11:17:14.983779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:46.282 [2024-11-27 11:17:14.983789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:21:46.282 [2024-11-27 11:17:14.983797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.282 [2024-11-27 11:17:14.986820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.282 [2024-11-27 11:17:14.986870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:46.282 [2024-11-27 11:17:14.986880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:21:46.282 [2024-11-27 11:17:14.986905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.282 [2024-11-27 11:17:14.989584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.282 [2024-11-27 11:17:14.989774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:46.282 [2024-11-27 11:17:14.989793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.619 ms 00:21:46.282 [2024-11-27 11:17:14.989800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.282 [2024-11-27 11:17:14.992126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.282 [2024-11-27 11:17:14.992171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:46.282 [2024-11-27 11:17:14.992182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.187 ms 00:21:46.282 [2024-11-27 11:17:14.992189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.282 [2024-11-27 11:17:14.994453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.282 [2024-11-27 11:17:14.994503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:46.282 [2024-11-27 11:17:14.994513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.191 ms 00:21:46.282 [2024-11-27 11:17:14.994520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.282 [2024-11-27 11:17:14.994561] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:46.282 [2024-11-27 11:17:14.994586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 
11:17:14.994635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:21:46.282 [2024-11-27 11:17:14.994831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:46.282 [2024-11-27 11:17:14.994916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.994925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.994933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.994942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.994949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.994958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.994966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.994973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.994982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.994990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.994997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:46.283 [2024-11-27 11:17:14.995423] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:46.283 [2024-11-27 11:17:14.995432] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cf8b7018-7e12-468c-960c-3a6629bb4ab1 00:21:46.283 [2024-11-27 11:17:14.995440] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:46.283 [2024-11-27 11:17:14.995460] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:46.283 [2024-11-27 11:17:14.995468] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:46.283 [2024-11-27 11:17:14.995477] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:46.283 [2024-11-27 11:17:14.995486] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:46.283 [2024-11-27 11:17:14.995498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:46.283 [2024-11-27 11:17:14.995506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:46.283 [2024-11-27 11:17:14.995512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:46.283 [2024-11-27 11:17:14.995521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:46.283 [2024-11-27 11:17:14.995528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.283 [2024-11-27 11:17:14.995536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:46.284 [2024-11-27 11:17:14.995557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:21:46.284 [2024-11-27 11:17:14.995570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:14.998030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.284 [2024-11-27 11:17:14.998062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:46.284 [2024-11-27 11:17:14.998073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.436 ms 00:21:46.284 [2024-11-27 11:17:14.998083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:14.998227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.284 [2024-11-27 11:17:14.998247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:46.284 [2024-11-27 11:17:14.998257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:46.284 [2024-11-27 11:17:14.998265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:15.005157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.284 [2024-11-27 11:17:15.005209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:46.284 [2024-11-27 11:17:15.005221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.284 [2024-11-27 11:17:15.005229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:15.005286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.284 [2024-11-27 11:17:15.005302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:46.284 [2024-11-27 11:17:15.005310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.284 [2024-11-27 11:17:15.005318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:15.005372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.284 [2024-11-27 11:17:15.005383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:46.284 [2024-11-27 11:17:15.005392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.284 [2024-11-27 11:17:15.005401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:15.005417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.284 [2024-11-27 11:17:15.005426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:46.284 [2024-11-27 11:17:15.005438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:46.284 [2024-11-27 11:17:15.005447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:15.018815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.284 [2024-11-27 11:17:15.018870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:46.284 [2024-11-27 11:17:15.018882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.284 [2024-11-27 11:17:15.018910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:15.028853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.284 [2024-11-27 11:17:15.028994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:46.284 [2024-11-27 11:17:15.029016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.284 [2024-11-27 11:17:15.029025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:15.029077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.284 [2024-11-27 11:17:15.029087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:46.284 [2024-11-27 11:17:15.029102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.284 [2024-11-27 11:17:15.029111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:15.029148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.284 [2024-11-27 11:17:15.029162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:46.284 [2024-11-27 11:17:15.029171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.284 [2024-11-27 11:17:15.029183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:15.029255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.284 [2024-11-27 11:17:15.029266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:46.284 [2024-11-27 11:17:15.029275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.284 [2024-11-27 11:17:15.029284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:15.029317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.284 [2024-11-27 11:17:15.029327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:46.284 [2024-11-27 11:17:15.029335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.284 [2024-11-27 11:17:15.029344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:15.029386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.284 [2024-11-27 11:17:15.029397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:46.284 [2024-11-27 11:17:15.029405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.284 [2024-11-27 11:17:15.029413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:15.029458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.284 [2024-11-27 11:17:15.029468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:46.284 [2024-11-27 11:17:15.029477] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.284 [2024-11-27 11:17:15.029488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.284 [2024-11-27 11:17:15.029622] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.712 ms, result 0 00:21:46.546 00:21:46.546 00:21:46.546 11:17:15 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:48.462 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:48.462 11:17:17 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:48.722 [2024-11-27 11:17:17.388852] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:48.722 [2024-11-27 11:17:17.389005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88066 ] 00:21:48.722 [2024-11-27 11:17:17.537553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.722 [2024-11-27 11:17:17.573692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.985 [2024-11-27 11:17:17.665651] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:48.985 [2024-11-27 11:17:17.665726] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:48.985 [2024-11-27 11:17:17.822965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.985 [2024-11-27 11:17:17.823145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:48.985 [2024-11-27 11:17:17.823171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:48.985 [2024-11-27 11:17:17.823180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.985 [2024-11-27 11:17:17.823236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.985 [2024-11-27 11:17:17.823246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:48.985 [2024-11-27 11:17:17.823254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:48.985 [2024-11-27 11:17:17.823261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.985 [2024-11-27 11:17:17.823281] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:48.985 [2024-11-27 11:17:17.823518] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:48.985 [2024-11-27 11:17:17.823532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.985 [2024-11-27 11:17:17.823540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:48.985 [2024-11-27 11:17:17.823554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:21:48.985 [2024-11-27 11:17:17.823564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.985 [2024-11-27 11:17:17.824777] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:48.985 [2024-11-27 11:17:17.827738] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.985 [2024-11-27 11:17:17.827783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:48.985 [2024-11-27 11:17:17.827794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.963 ms 00:21:48.985 [2024-11-27 11:17:17.827801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.985 [2024-11-27 11:17:17.827861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.985 [2024-11-27 11:17:17.827871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:48.985 [2024-11-27 11:17:17.827884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:48.985 [2024-11-27 11:17:17.827910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.985 [2024-11-27 11:17:17.833677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.985 [2024-11-27 11:17:17.833718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:48.985 [2024-11-27 11:17:17.833729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.703 ms 00:21:48.985 [2024-11-27 11:17:17.833740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.985 [2024-11-27 11:17:17.833823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.985 [2024-11-27 11:17:17.833832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:48.985 [2024-11-27 11:17:17.833839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:48.985 [2024-11-27 11:17:17.833847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.985 [2024-11-27 11:17:17.833907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.985 [2024-11-27 11:17:17.833921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:48.985 [2024-11-27 11:17:17.833929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:48.985 [2024-11-27 11:17:17.833937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.985 [2024-11-27 11:17:17.833963] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:48.985 [2024-11-27 11:17:17.835500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.985 [2024-11-27 11:17:17.835531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:48.985 [2024-11-27 11:17:17.835540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:21:48.985 [2024-11-27 11:17:17.835553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.985 [2024-11-27 11:17:17.835587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.985 [2024-11-27 11:17:17.835594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:48.985 [2024-11-27 11:17:17.835602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:48.985 [2024-11-27 11:17:17.835609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.985 [2024-11-27 11:17:17.835628] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:48.985 [2024-11-27 11:17:17.835650] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:48.985 [2024-11-27 
11:17:17.835691] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:48.985 [2024-11-27 11:17:17.835709] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:48.985 [2024-11-27 11:17:17.835814] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:48.985 [2024-11-27 11:17:17.835824] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:48.985 [2024-11-27 11:17:17.835838] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:48.985 [2024-11-27 11:17:17.835848] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:48.985 [2024-11-27 11:17:17.835862] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:48.985 [2024-11-27 11:17:17.835870] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:48.985 [2024-11-27 11:17:17.835877] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:48.985 [2024-11-27 11:17:17.835924] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:48.985 [2024-11-27 11:17:17.835932] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:48.985 [2024-11-27 11:17:17.835940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.985 [2024-11-27 11:17:17.835947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:48.985 [2024-11-27 11:17:17.835954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:21:48.985 [2024-11-27 11:17:17.835962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.985 [2024-11-27 11:17:17.836045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.985 [2024-11-27 11:17:17.836058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:48.986 [2024-11-27 11:17:17.836066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:48.986 [2024-11-27 11:17:17.836073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.986 [2024-11-27 11:17:17.836173] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:48.986 [2024-11-27 11:17:17.836184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:48.986 [2024-11-27 11:17:17.836200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.986 [2024-11-27 11:17:17.836214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.986 [2024-11-27 11:17:17.836223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:48.986 [2024-11-27 11:17:17.836231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:48.986 [2024-11-27 11:17:17.836239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:48.986 [2024-11-27 11:17:17.836247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:48.986 [2024-11-27 11:17:17.836255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:48.986 [2024-11-27 11:17:17.836262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.986 [2024-11-27 
11:17:17.836272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:48.986 [2024-11-27 11:17:17.836280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:48.986 [2024-11-27 11:17:17.836287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.986 [2024-11-27 11:17:17.836297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:48.986 [2024-11-27 11:17:17.836305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:48.986 [2024-11-27 11:17:17.836312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.986 [2024-11-27 11:17:17.836320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:48.986 [2024-11-27 11:17:17.836327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:48.986 [2024-11-27 11:17:17.836334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.986 [2024-11-27 11:17:17.836341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:48.986 [2024-11-27 11:17:17.836349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:48.986 [2024-11-27 11:17:17.836356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.986 [2024-11-27 11:17:17.836363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:48.986 [2024-11-27 11:17:17.836371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:48.986 [2024-11-27 11:17:17.836378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.986 [2024-11-27 11:17:17.836385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:48.986 [2024-11-27 11:17:17.836392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:48.986 [2024-11-27 11:17:17.836399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.986 [2024-11-27 11:17:17.836407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:48.986 [2024-11-27 11:17:17.836419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:48.986 [2024-11-27 11:17:17.836426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.986 [2024-11-27 11:17:17.836434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:48.986 [2024-11-27 11:17:17.836441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:48.986 [2024-11-27 11:17:17.836448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.986 [2024-11-27 11:17:17.836456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:48.986 [2024-11-27 11:17:17.836463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:48.986 [2024-11-27 11:17:17.836470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.986 [2024-11-27 11:17:17.836477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:48.986 [2024-11-27 11:17:17.836484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:48.986 [2024-11-27 11:17:17.836492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.986 [2024-11-27 11:17:17.836500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:48.986 [2024-11-27 11:17:17.836508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.75 MiB 00:21:48.986 [2024-11-27 11:17:17.836517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.986 [2024-11-27 11:17:17.836524] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:48.986 [2024-11-27 11:17:17.836531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:48.986 [2024-11-27 11:17:17.836541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.986 [2024-11-27 11:17:17.836550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.986 [2024-11-27 11:17:17.836557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:48.986 [2024-11-27 11:17:17.836564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:48.986 [2024-11-27 11:17:17.836570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:48.986 [2024-11-27 11:17:17.836577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:48.986 [2024-11-27 11:17:17.836584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:48.986 [2024-11-27 11:17:17.836590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:48.986 [2024-11-27 11:17:17.836598] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:48.986 [2024-11-27 11:17:17.836607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.986 [2024-11-27 11:17:17.836615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:48.986 [2024-11-27 11:17:17.836622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:48.986 [2024-11-27 11:17:17.836630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:48.986 [2024-11-27 11:17:17.836636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:48.986 [2024-11-27 11:17:17.836643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:48.986 [2024-11-27 11:17:17.836651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:48.986 [2024-11-27 11:17:17.836660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:48.986 [2024-11-27 11:17:17.836666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:48.986 [2024-11-27 11:17:17.836673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:48.986 [2024-11-27 11:17:17.836680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:48.986 [2024-11-27 11:17:17.836687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:48.986 [2024-11-27 11:17:17.836694] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:48.986 [2024-11-27 11:17:17.836701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:48.986 [2024-11-27 11:17:17.836708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:48.986 [2024-11-27 11:17:17.836717] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:48.986 [2024-11-27 11:17:17.836728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.987 [2024-11-27 11:17:17.836735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:48.987 [2024-11-27 11:17:17.836742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:48.987 [2024-11-27 11:17:17.836749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:48.987 [2024-11-27 11:17:17.836758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:48.987 [2024-11-27 11:17:17.836765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.987 [2024-11-27 11:17:17.836772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:48.987 [2024-11-27 11:17:17.836782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:21:48.987 [2024-11-27 11:17:17.836798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.987 [2024-11-27 11:17:17.856803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.987 [2024-11-27 11:17:17.857047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:48.987 [2024-11-27 11:17:17.857082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.951 ms 00:21:48.987 [2024-11-27 11:17:17.857102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.987 [2024-11-27 11:17:17.857232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.987 [2024-11-27 11:17:17.857251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:48.987 [2024-11-27 11:17:17.857268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:48.987 [2024-11-27 11:17:17.857279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.867400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.867439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:49.249 [2024-11-27 11:17:17.867449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.043 ms 00:21:49.249 [2024-11-27 11:17:17.867457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.867487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.867495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:49.249 
[2024-11-27 11:17:17.867504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:49.249 [2024-11-27 11:17:17.867512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.867930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.867956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:49.249 [2024-11-27 11:17:17.867965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:21:49.249 [2024-11-27 11:17:17.867973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.868104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.868151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:49.249 [2024-11-27 11:17:17.868163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:21:49.249 [2024-11-27 11:17:17.868171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.873397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.873438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:49.249 [2024-11-27 11:17:17.873451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.200 ms 00:21:49.249 [2024-11-27 11:17:17.873459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.876488] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:49.249 [2024-11-27 11:17:17.876526] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:49.249 [2024-11-27 11:17:17.876540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.876548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:49.249 [2024-11-27 11:17:17.876556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.997 ms 00:21:49.249 [2024-11-27 11:17:17.876563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.891535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.891667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:49.249 [2024-11-27 11:17:17.891693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.930 ms 00:21:49.249 [2024-11-27 11:17:17.891700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.894069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.894100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:49.249 [2024-11-27 11:17:17.894109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.332 ms 00:21:49.249 [2024-11-27 11:17:17.894116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.896104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.896139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:49.249 [2024-11-27 11:17:17.896148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.953 ms 00:21:49.249 [2024-11-27 11:17:17.896155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.896485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.896497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:49.249 [2024-11-27 11:17:17.896505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:21:49.249 [2024-11-27 11:17:17.896517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.915997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.916061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:49.249 [2024-11-27 11:17:17.916073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.464 ms 00:21:49.249 [2024-11-27 11:17:17.916081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.923806] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:49.249 [2024-11-27 11:17:17.926473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.926609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:49.249 [2024-11-27 11:17:17.926631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.347 ms 00:21:49.249 [2024-11-27 11:17:17.926639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.926731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.926743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:49.249 [2024-11-27 11:17:17.926752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:49.249 [2024-11-27 11:17:17.926763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.926827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.926837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:49.249 [2024-11-27 11:17:17.926846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:49.249 [2024-11-27 11:17:17.926856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.926874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.926882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:49.249 [2024-11-27 11:17:17.926906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:49.249 [2024-11-27 11:17:17.926914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.926947] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:49.249 [2024-11-27 11:17:17.926957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.926967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:49.249 [2024-11-27 11:17:17.926975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:49.249 [2024-11-27 11:17:17.926987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.931556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.931596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:49.249 [2024-11-27 11:17:17.931606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.550 ms 00:21:49.249 [2024-11-27 11:17:17.931614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.931693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.249 [2024-11-27 11:17:17.931703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:49.249 [2024-11-27 11:17:17.931711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:49.249 [2024-11-27 11:17:17.931719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.249 [2024-11-27 11:17:17.932704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 109.322 ms, result 0 00:21:50.194  [2024-11-27T11:17:20.023Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-27T11:17:20.966Z] Copying: 27/1024 [MB] (15 MBps) [2024-11-27T11:17:22.356Z] Copying: 43/1024 [MB] (15 MBps) [2024-11-27T11:17:23.301Z] Copying: 60/1024 [MB] (16 MBps) [2024-11-27T11:17:24.245Z] Copying: 75/1024 [MB] (15 MBps) [2024-11-27T11:17:25.190Z] Copying: 90/1024 [MB] (14 MBps) [2024-11-27T11:17:26.136Z] Copying: 109/1024 [MB] (19 MBps) [2024-11-27T11:17:27.079Z] Copying: 125/1024 [MB] (15 MBps) [2024-11-27T11:17:28.025Z] Copying: 143/1024 [MB] (17 MBps) [2024-11-27T11:17:28.968Z] Copying: 162/1024 [MB] (19 MBps) [2024-11-27T11:17:30.353Z] Copying: 180/1024 [MB] (17 MBps) [2024-11-27T11:17:31.299Z] Copying: 194/1024 [MB] (13 MBps) [2024-11-27T11:17:32.247Z] Copying: 212/1024 [MB] (17 MBps) [2024-11-27T11:17:33.192Z] Copying: 228/1024 [MB] (15 MBps) [2024-11-27T11:17:34.212Z] Copying: 238/1024 [MB] (10 MBps) [2024-11-27T11:17:35.218Z] Copying: 248/1024 [MB] (10 MBps) [2024-11-27T11:17:36.163Z] Copying: 258/1024 [MB] (10 MBps) [2024-11-27T11:17:37.108Z] Copying: 274568/1048576 [kB] (10156 kBps) [2024-11-27T11:17:38.053Z] Copying: 278/1024 [MB] (10 MBps) [2024-11-27T11:17:38.998Z] Copying: 293/1024 [MB] (14 MBps) [2024-11-27T11:17:40.384Z] Copying: 303/1024 [MB] (10 MBps) [2024-11-27T11:17:40.956Z] Copying: 320796/1048576 [kB] (10096 kBps) [2024-11-27T11:17:42.343Z] Copying: 323/1024 [MB] (10 MBps) [2024-11-27T11:17:43.287Z] Copying: 341080/1048576 [kB] (10024 kBps) [2024-11-27T11:17:44.231Z] Copying: 343/1024 [MB] (10 MBps) [2024-11-27T11:17:45.176Z] Copying: 354/1024 [MB] (10 MBps) [2024-11-27T11:17:46.116Z] Copying: 364/1024 [MB] (10 MBps) [2024-11-27T11:17:47.062Z] Copying: 375/1024 [MB] (10 MBps) [2024-11-27T11:17:48.004Z] Copying: 385/1024 [MB] (10 MBps) [2024-11-27T11:17:49.381Z] Copying: 405/1024 [MB] (20 MBps) [2024-11-27T11:17:49.953Z] Copying: 444/1024 [MB] (38 MBps) [2024-11-27T11:17:51.342Z] Copying: 467/1024 [MB] (23 MBps) [2024-11-27T11:17:52.286Z] Copying: 478/1024 [MB] (11 MBps) [2024-11-27T11:17:53.231Z] Copying: 493/1024 [MB] (14 MBps) [2024-11-27T11:17:54.175Z] Copying: 508/1024 [MB] (15 MBps) [2024-11-27T11:17:55.119Z] Copying: 531420/1048576 [kB] (10208 kBps) [2024-11-27T11:17:56.062Z] Copying: 541432/1048576 [kB] (10012 kBps) [2024-11-27T11:17:57.006Z] Copying: 547/1024 [MB] (19 MBps) [2024-11-27T11:17:57.947Z] Copying: 566/1024 [MB] (18 MBps) [2024-11-27T11:17:59.329Z] Copying: 593/1024 [MB] (27 
MBps) [2024-11-27T11:18:00.273Z] Copying: 614/1024 [MB] (20 MBps) [2024-11-27T11:18:01.215Z] Copying: 635/1024 [MB] (21 MBps) [2024-11-27T11:18:02.155Z] Copying: 657/1024 [MB] (21 MBps) [2024-11-27T11:18:03.097Z] Copying: 670/1024 [MB] (13 MBps) [2024-11-27T11:18:04.040Z] Copying: 682/1024 [MB] (12 MBps) [2024-11-27T11:18:05.001Z] Copying: 699/1024 [MB] (17 MBps) [2024-11-27T11:18:05.992Z] Copying: 719/1024 [MB] (19 MBps) [2024-11-27T11:18:07.380Z] Copying: 736/1024 [MB] (16 MBps) [2024-11-27T11:18:07.953Z] Copying: 750/1024 [MB] (14 MBps) [2024-11-27T11:18:09.341Z] Copying: 767/1024 [MB] (17 MBps) [2024-11-27T11:18:10.284Z] Copying: 786/1024 [MB] (18 MBps) [2024-11-27T11:18:11.228Z] Copying: 799/1024 [MB] (13 MBps) [2024-11-27T11:18:12.172Z] Copying: 815/1024 [MB] (16 MBps) [2024-11-27T11:18:13.115Z] Copying: 826/1024 [MB] (10 MBps) [2024-11-27T11:18:14.059Z] Copying: 836/1024 [MB] (10 MBps) [2024-11-27T11:18:15.002Z] Copying: 857/1024 [MB] (20 MBps) [2024-11-27T11:18:15.946Z] Copying: 868/1024 [MB] (10 MBps) [2024-11-27T11:18:17.333Z] Copying: 885/1024 [MB] (17 MBps) [2024-11-27T11:18:18.274Z] Copying: 904/1024 [MB] (18 MBps) [2024-11-27T11:18:19.218Z] Copying: 921/1024 [MB] (16 MBps) [2024-11-27T11:18:20.162Z] Copying: 939/1024 [MB] (17 MBps) [2024-11-27T11:18:21.107Z] Copying: 949/1024 [MB] (10 MBps) [2024-11-27T11:18:22.047Z] Copying: 960/1024 [MB] (10 MBps) [2024-11-27T11:18:22.986Z] Copying: 970/1024 [MB] (10 MBps) [2024-11-27T11:18:24.365Z] Copying: 982/1024 [MB] (12 MBps) [2024-11-27T11:18:25.311Z] Copying: 996/1024 [MB] (13 MBps) [2024-11-27T11:18:26.258Z] Copying: 1007/1024 [MB] (11 MBps) [2024-11-27T11:18:26.828Z] Copying: 1023/1024 [MB] (15 MBps) [2024-11-27T11:18:26.828Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-27 11:18:26.710655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.945 [2024-11-27 11:18:26.710768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:57.945 [2024-11-27 11:18:26.710786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:57.945 [2024-11-27 11:18:26.710801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.945 [2024-11-27 11:18:26.714203] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:57.945 [2024-11-27 11:18:26.715972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.945 [2024-11-27 11:18:26.716143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:57.945 [2024-11-27 11:18:26.716210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:22:57.945 [2024-11-27 11:18:26.716238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.945 [2024-11-27 11:18:26.729112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.945 [2024-11-27 11:18:26.729281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:57.945 [2024-11-27 11:18:26.729401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.279 ms 00:22:57.945 [2024-11-27 11:18:26.729427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.945 [2024-11-27 11:18:26.749536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.945 [2024-11-27 11:18:26.749701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:57.945 [2024-11-27 11:18:26.749801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 20.071 ms 00:22:57.945 [2024-11-27 11:18:26.749824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.945 [2024-11-27 11:18:26.754720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.945 [2024-11-27 11:18:26.754868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:57.945 [2024-11-27 11:18:26.754953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.855 ms 00:22:57.945 [2024-11-27 11:18:26.755041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.945 [2024-11-27 11:18:26.756422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.945 [2024-11-27 11:18:26.756557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:57.946 [2024-11-27 11:18:26.756571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.321 ms 00:22:57.946 [2024-11-27 11:18:26.756580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.946 [2024-11-27 11:18:26.760586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.946 [2024-11-27 11:18:26.760708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:57.946 [2024-11-27 11:18:26.760769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.973 ms 00:22:57.946 [2024-11-27 11:18:26.760819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.946 [2024-11-27 11:18:26.807497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.946 [2024-11-27 11:18:26.807600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:57.946 [2024-11-27 11:18:26.807648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.620 ms 00:22:57.946 [2024-11-27 11:18:26.807667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.946 [2024-11-27 11:18:26.809214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.946 [2024-11-27 11:18:26.809318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:57.946 [2024-11-27 11:18:26.809363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.522 ms 00:22:57.946 [2024-11-27 11:18:26.809380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.946 [2024-11-27 11:18:26.810552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.946 [2024-11-27 11:18:26.810654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:57.946 [2024-11-27 11:18:26.810700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms 00:22:57.946 [2024-11-27 11:18:26.810717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.946 [2024-11-27 11:18:26.811578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.946 [2024-11-27 11:18:26.811681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:57.946 [2024-11-27 11:18:26.811724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:22:57.946 [2024-11-27 11:18:26.811742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.946 [2024-11-27 11:18:26.812562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.946 [2024-11-27 11:18:26.812663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:57.946 
[2024-11-27 11:18:26.812710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:22:57.946 [2024-11-27 11:18:26.812729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.946 [2024-11-27 11:18:26.812760] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:57.946 [2024-11-27 11:18:26.812783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108032 / 261120 wr_cnt: 1 state: open 00:22:57.946 [2024-11-27 11:18:26.812809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.812833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.812909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813587] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.813955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814392] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.814979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.815026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.815050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.815074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.815097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.815147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.815173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.815197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.815220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.815244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 
11:18:26.815290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.815314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:57.946 [2024-11-27 11:18:26.815338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.815974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.816018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.816043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:22:57.947 [2024-11-27 11:18:26.816066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.816116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.816140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:57.947 [2024-11-27 11:18:26.816171] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:57.947 [2024-11-27 11:18:26.816209] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cf8b7018-7e12-468c-960c-3a6629bb4ab1 00:22:57.947 [2024-11-27 11:18:26.816235] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108032 00:22:57.947 [2024-11-27 11:18:26.816299] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108992 00:22:57.947 [2024-11-27 11:18:26.816342] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108032 00:22:57.947 [2024-11-27 11:18:26.816360] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:22:57.947 [2024-11-27 11:18:26.816366] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:57.947 [2024-11-27 11:18:26.816373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:57.947 [2024-11-27 11:18:26.816379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:57.947 [2024-11-27 11:18:26.816385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:57.947 [2024-11-27 11:18:26.816391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:57.947 [2024-11-27 11:18:26.816397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.947 [2024-11-27 11:18:26.816404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:57.947 [2024-11-27 11:18:26.816410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.638 ms 00:22:57.947 [2024-11-27 11:18:26.816416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.947 [2024-11-27 11:18:26.817909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.947 [2024-11-27 11:18:26.817937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:57.947 [2024-11-27 11:18:26.817944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms 00:22:57.947 [2024-11-27 11:18:26.817950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.947 [2024-11-27 11:18:26.818027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.947 [2024-11-27 11:18:26.818035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:57.947 [2024-11-27 11:18:26.818041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:57.947 [2024-11-27 11:18:26.818047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.947 [2024-11-27 11:18:26.822382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.947 [2024-11-27 11:18:26.822415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:57.947 [2024-11-27 11:18:26.822423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.947 [2024-11-27 11:18:26.822428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.947 [2024-11-27 11:18:26.822469] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.947 [2024-11-27 11:18:26.822477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:57.947 [2024-11-27 11:18:26.822484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.947 [2024-11-27 11:18:26.822489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.947 [2024-11-27 11:18:26.822530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.947 [2024-11-27 11:18:26.822541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:57.947 [2024-11-27 11:18:26.822547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.947 [2024-11-27 11:18:26.822553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.947 [2024-11-27 11:18:26.822564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.947 [2024-11-27 11:18:26.822571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:57.947 [2024-11-27 11:18:26.822577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.947 [2024-11-27 11:18:26.822582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.207 [2024-11-27 11:18:26.831153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.207 [2024-11-27 11:18:26.831191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:58.207 [2024-11-27 11:18:26.831199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.207 [2024-11-27 11:18:26.831205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.207 [2024-11-27 11:18:26.838031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.207 [2024-11-27 11:18:26.838065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:58.208 [2024-11-27 11:18:26.838073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.208 [2024-11-27 11:18:26.838079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.208 [2024-11-27 11:18:26.838105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.208 [2024-11-27 11:18:26.838113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:58.208 [2024-11-27 11:18:26.838122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.208 [2024-11-27 11:18:26.838127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.208 [2024-11-27 11:18:26.838182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.208 [2024-11-27 11:18:26.838190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:58.208 [2024-11-27 11:18:26.838196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.208 [2024-11-27 11:18:26.838202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.208 [2024-11-27 11:18:26.838255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.208 [2024-11-27 11:18:26.838263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:58.208 [2024-11-27 11:18:26.838271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.208 [2024-11-27 11:18:26.838277] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.208 [2024-11-27 11:18:26.838297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.208 [2024-11-27 11:18:26.838304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:58.208 [2024-11-27 11:18:26.838310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.208 [2024-11-27 11:18:26.838316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.208 [2024-11-27 11:18:26.838342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.208 [2024-11-27 11:18:26.838349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:58.208 [2024-11-27 11:18:26.838355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.208 [2024-11-27 11:18:26.838363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.208 [2024-11-27 11:18:26.838399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.208 [2024-11-27 11:18:26.838406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:58.208 [2024-11-27 11:18:26.838413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.208 [2024-11-27 11:18:26.838419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.208 [2024-11-27 11:18:26.838513] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 128.412 ms, result 0 00:22:58.778 00:22:58.778 00:22:58.778 11:18:27 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:58.778 [2024-11-27 11:18:27.609818] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:58.778 [2024-11-27 11:18:27.610183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88790 ] 00:22:59.037 [2024-11-27 11:18:27.763438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.037 [2024-11-27 11:18:27.805978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.037 [2024-11-27 11:18:27.890863] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:59.037 [2024-11-27 11:18:27.890922] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:59.299 [2024-11-27 11:18:28.046537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.299 [2024-11-27 11:18:28.046575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:59.299 [2024-11-27 11:18:28.046589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:59.299 [2024-11-27 11:18:28.046601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.299 [2024-11-27 11:18:28.046649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.299 [2024-11-27 11:18:28.046662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:59.299 [2024-11-27 11:18:28.046670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:59.299 [2024-11-27 11:18:28.046677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.299 [2024-11-27 11:18:28.046694] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:59.299 [2024-11-27 11:18:28.046944] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:59.299 [2024-11-27 11:18:28.046968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.299 [2024-11-27 11:18:28.046979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:59.299 [2024-11-27 11:18:28.046987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:22:59.299 [2024-11-27 11:18:28.046996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.299 [2024-11-27 11:18:28.048082] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:59.299 [2024-11-27 11:18:28.050437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.299 [2024-11-27 11:18:28.050465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:59.299 [2024-11-27 11:18:28.050474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.358 ms 00:22:59.299 [2024-11-27 11:18:28.050487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.299 [2024-11-27 11:18:28.050541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.299 [2024-11-27 11:18:28.050552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:59.299 [2024-11-27 11:18:28.050560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:59.299 [2024-11-27 11:18:28.050567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.299 [2024-11-27 11:18:28.055276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:59.299 [2024-11-27 11:18:28.055302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:59.299 [2024-11-27 11:18:28.055310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.653 ms 00:22:59.299 [2024-11-27 11:18:28.055322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.299 [2024-11-27 11:18:28.055396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.299 [2024-11-27 11:18:28.055405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:59.299 [2024-11-27 11:18:28.055413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:59.299 [2024-11-27 11:18:28.055420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.299 [2024-11-27 11:18:28.055464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.299 [2024-11-27 11:18:28.055473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:59.299 [2024-11-27 11:18:28.055481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:59.299 [2024-11-27 11:18:28.055494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.299 [2024-11-27 11:18:28.055521] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:59.299 [2024-11-27 11:18:28.056798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.299 [2024-11-27 11:18:28.056820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:59.299 [2024-11-27 11:18:28.056829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:22:59.299 [2024-11-27 11:18:28.056835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.299 [2024-11-27 11:18:28.056869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.299 [2024-11-27 11:18:28.056877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:59.299 [2024-11-27 11:18:28.056885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:59.299 [2024-11-27 11:18:28.056910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.299 [2024-11-27 11:18:28.056933] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:59.299 [2024-11-27 11:18:28.056954] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:59.299 [2024-11-27 11:18:28.056989] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:59.299 [2024-11-27 11:18:28.057003] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:59.299 [2024-11-27 11:18:28.057104] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:59.299 [2024-11-27 11:18:28.057116] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:59.299 [2024-11-27 11:18:28.057127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:59.299 [2024-11-27 11:18:28.057137] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:59.299 [2024-11-27 11:18:28.057148] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:59.299 [2024-11-27 11:18:28.057156] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:59.299 [2024-11-27 11:18:28.057166] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:59.299 [2024-11-27 11:18:28.057173] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:59.299 [2024-11-27 11:18:28.057181] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:59.299 [2024-11-27 11:18:28.057189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.299 [2024-11-27 11:18:28.057195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:59.299 [2024-11-27 11:18:28.057203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:22:59.299 [2024-11-27 11:18:28.057209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.299 [2024-11-27 11:18:28.057291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.299 [2024-11-27 11:18:28.057301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:59.299 [2024-11-27 11:18:28.057308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:59.299 [2024-11-27 11:18:28.057318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.299 [2024-11-27 11:18:28.057412] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:59.299 [2024-11-27 11:18:28.057421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:59.299 [2024-11-27 11:18:28.057429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:59.299 [2024-11-27 11:18:28.057441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.299 [2024-11-27 11:18:28.057449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:59.299 [2024-11-27 11:18:28.057457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:59.299 [2024-11-27 11:18:28.057464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:59.299 [2024-11-27 11:18:28.057472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:59.299 [2024-11-27 11:18:28.057480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:59.299 [2024-11-27 11:18:28.057488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:59.299 [2024-11-27 11:18:28.057495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:59.299 [2024-11-27 11:18:28.057502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:59.299 [2024-11-27 11:18:28.057509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:59.299 [2024-11-27 11:18:28.057517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:59.299 [2024-11-27 11:18:28.057525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:59.299 [2024-11-27 11:18:28.057535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.299 [2024-11-27 11:18:28.057543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:59.299 [2024-11-27 11:18:28.057550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:59.299 [2024-11-27 11:18:28.057557] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.299 [2024-11-27 11:18:28.057565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:59.299 [2024-11-27 11:18:28.057572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:59.299 [2024-11-27 11:18:28.057580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.299 [2024-11-27 11:18:28.057588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:59.299 [2024-11-27 11:18:28.057595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:59.299 [2024-11-27 11:18:28.057602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.299 [2024-11-27 11:18:28.057609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:59.299 [2024-11-27 11:18:28.057617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:59.299 [2024-11-27 11:18:28.057624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.299 [2024-11-27 11:18:28.057632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:59.300 [2024-11-27 11:18:28.057640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:59.300 [2024-11-27 11:18:28.057647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.300 [2024-11-27 11:18:28.057657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:59.300 [2024-11-27 11:18:28.057664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:59.300 [2024-11-27 11:18:28.057672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:59.300 [2024-11-27 11:18:28.057679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:59.300 [2024-11-27 11:18:28.057686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:59.300 [2024-11-27 11:18:28.057693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:59.300 [2024-11-27 11:18:28.057701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:59.300 [2024-11-27 11:18:28.057708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:59.300 [2024-11-27 11:18:28.057715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.300 [2024-11-27 11:18:28.057723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:59.300 [2024-11-27 11:18:28.057730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:59.300 [2024-11-27 11:18:28.057737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.300 [2024-11-27 11:18:28.057745] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:59.300 [2024-11-27 11:18:28.057753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:59.300 [2024-11-27 11:18:28.057761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:59.300 [2024-11-27 11:18:28.057772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.300 [2024-11-27 11:18:28.057782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:59.300 [2024-11-27 11:18:28.057790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:59.300 [2024-11-27 11:18:28.057798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:59.300 
[2024-11-27 11:18:28.057805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:59.300 [2024-11-27 11:18:28.057813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:59.300 [2024-11-27 11:18:28.057821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:59.300 [2024-11-27 11:18:28.057829] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:59.300 [2024-11-27 11:18:28.057837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:59.300 [2024-11-27 11:18:28.057850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:59.300 [2024-11-27 11:18:28.057857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:59.300 [2024-11-27 11:18:28.057864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:59.300 [2024-11-27 11:18:28.057871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:59.300 [2024-11-27 11:18:28.057878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:59.300 [2024-11-27 11:18:28.057885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:59.300 [2024-11-27 11:18:28.057903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:59.300 [2024-11-27 11:18:28.057910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:59.300 [2024-11-27 11:18:28.057919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:59.300 [2024-11-27 11:18:28.057926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:59.300 [2024-11-27 11:18:28.057933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:59.300 [2024-11-27 11:18:28.057939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:59.300 [2024-11-27 11:18:28.057946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:59.300 [2024-11-27 11:18:28.057953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:59.300 [2024-11-27 11:18:28.057960] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:59.300 [2024-11-27 11:18:28.057969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:59.300 [2024-11-27 11:18:28.057978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:59.300 [2024-11-27 11:18:28.057985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:59.300 [2024-11-27 11:18:28.057992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:59.300 [2024-11-27 11:18:28.058000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:59.300 [2024-11-27 11:18:28.058008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.300 [2024-11-27 11:18:28.058015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:59.300 [2024-11-27 11:18:28.058022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:22:59.300 [2024-11-27 11:18:28.058033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.300 [2024-11-27 11:18:28.076319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.300 [2024-11-27 11:18:28.076380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:59.300 [2024-11-27 11:18:28.076404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.243 ms 00:22:59.300 [2024-11-27 11:18:28.076421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.300 [2024-11-27 11:18:28.076553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.300 [2024-11-27 11:18:28.076570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:59.300 [2024-11-27 11:18:28.076583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:59.300 [2024-11-27 11:18:28.076595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.300 [2024-11-27 11:18:28.086286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.300 [2024-11-27 11:18:28.086319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:59.300 [2024-11-27 11:18:28.086328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.610 ms 00:22:59.300 [2024-11-27 11:18:28.086335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.300 [2024-11-27 11:18:28.086365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.300 [2024-11-27 11:18:28.086373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:59.300 [2024-11-27 11:18:28.086381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:59.300 [2024-11-27 11:18:28.086392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.300 [2024-11-27 11:18:28.086704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.300 [2024-11-27 11:18:28.086732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:59.300 [2024-11-27 11:18:28.086742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:22:59.300 [2024-11-27 11:18:28.086749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.300 [2024-11-27 11:18:28.086871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.300 [2024-11-27 11:18:28.086879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:59.300 [2024-11-27 11:18:28.086910] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:22:59.300 [2024-11-27 11:18:28.086923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.300 [2024-11-27 11:18:28.091333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.300 [2024-11-27 11:18:28.091359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:59.300 [2024-11-27 11:18:28.091372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.379 ms 00:22:59.300 [2024-11-27 11:18:28.091379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.300 [2024-11-27 11:18:28.094053] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:59.300 [2024-11-27 11:18:28.094083] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:59.300 [2024-11-27 11:18:28.094093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.300 [2024-11-27 11:18:28.094100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:59.300 [2024-11-27 11:18:28.094108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.643 ms 00:22:59.300 [2024-11-27 11:18:28.094120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.300 [2024-11-27 11:18:28.108414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.300 [2024-11-27 11:18:28.108440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:59.300 [2024-11-27 11:18:28.108456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.257 ms 00:22:59.300 [2024-11-27 11:18:28.108463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.300 [2024-11-27 11:18:28.110470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.300 [2024-11-27 11:18:28.110495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:59.300 [2024-11-27 11:18:28.110504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.968 ms 00:22:59.300 [2024-11-27 11:18:28.110511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.300 [2024-11-27 11:18:28.112161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.300 [2024-11-27 11:18:28.112186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:59.300 [2024-11-27 11:18:28.112194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:22:59.300 [2024-11-27 11:18:28.112200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.301 [2024-11-27 11:18:28.112503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.301 [2024-11-27 11:18:28.112518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:59.301 [2024-11-27 11:18:28.112527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:22:59.301 [2024-11-27 11:18:28.112535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.301 [2024-11-27 11:18:28.128149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.301 [2024-11-27 11:18:28.128195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:59.301 [2024-11-27 11:18:28.128205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.599 ms 00:22:59.301 [2024-11-27 11:18:28.128213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.301 [2024-11-27 11:18:28.135698] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:59.301 [2024-11-27 11:18:28.138030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.301 [2024-11-27 11:18:28.138063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:59.301 [2024-11-27 11:18:28.138077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.779 ms 00:22:59.301 [2024-11-27 11:18:28.138089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.301 [2024-11-27 11:18:28.138135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.301 [2024-11-27 11:18:28.138146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:59.301 [2024-11-27 11:18:28.138155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:59.301 [2024-11-27 11:18:28.138163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.301 [2024-11-27 11:18:28.139555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.301 [2024-11-27 11:18:28.139583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:59.301 [2024-11-27 11:18:28.139592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.356 ms 00:22:59.301 [2024-11-27 11:18:28.139602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.301 [2024-11-27 11:18:28.139626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.301 [2024-11-27 11:18:28.139633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:59.301 [2024-11-27 11:18:28.139641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:59.301 [2024-11-27 11:18:28.139648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.301 [2024-11-27 11:18:28.139702] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:59.301 [2024-11-27 11:18:28.139712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.301 [2024-11-27 11:18:28.139719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:59.301 [2024-11-27 11:18:28.139727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:59.301 [2024-11-27 11:18:28.139734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.301 [2024-11-27 11:18:28.143595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.301 [2024-11-27 11:18:28.143632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:59.301 [2024-11-27 11:18:28.143642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.842 ms 00:22:59.301 [2024-11-27 11:18:28.143650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.301 [2024-11-27 11:18:28.143716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.301 [2024-11-27 11:18:28.143726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:59.301 [2024-11-27 11:18:28.143734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:59.301 [2024-11-27 11:18:28.143741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.301 
[2024-11-27 11:18:28.144620] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.695 ms, result 0 00:23:00.688  [2024-11-27T11:18:30.512Z] Copying: 9960/1048576 [kB] (9960 kBps) [2024-11-27T11:18:31.456Z] Copying: 30/1024 [MB] (20 MBps) [2024-11-27T11:18:32.404Z] Copying: 51/1024 [MB] (20 MBps) [2024-11-27T11:18:33.351Z] Copying: 73/1024 [MB] (22 MBps) [2024-11-27T11:18:34.738Z] Copying: 91/1024 [MB] (17 MBps) [2024-11-27T11:18:35.684Z] Copying: 111/1024 [MB] (20 MBps) [2024-11-27T11:18:36.630Z] Copying: 133/1024 [MB] (22 MBps) [2024-11-27T11:18:37.634Z] Copying: 146/1024 [MB] (13 MBps) [2024-11-27T11:18:38.580Z] Copying: 160/1024 [MB] (13 MBps) [2024-11-27T11:18:39.527Z] Copying: 173/1024 [MB] (13 MBps) [2024-11-27T11:18:40.472Z] Copying: 187/1024 [MB] (13 MBps) [2024-11-27T11:18:41.420Z] Copying: 198/1024 [MB] (10 MBps) [2024-11-27T11:18:42.365Z] Copying: 210/1024 [MB] (11 MBps) [2024-11-27T11:18:43.755Z] Copying: 222/1024 [MB] (12 MBps) [2024-11-27T11:18:44.327Z] Copying: 234/1024 [MB] (11 MBps) [2024-11-27T11:18:45.714Z] Copying: 245/1024 [MB] (11 MBps) [2024-11-27T11:18:46.657Z] Copying: 257/1024 [MB] (11 MBps) [2024-11-27T11:18:47.601Z] Copying: 269/1024 [MB] (12 MBps) [2024-11-27T11:18:48.545Z] Copying: 281/1024 [MB] (11 MBps) [2024-11-27T11:18:49.488Z] Copying: 292/1024 [MB] (11 MBps) [2024-11-27T11:18:50.431Z] Copying: 304/1024 [MB] (11 MBps) [2024-11-27T11:18:51.376Z] Copying: 315/1024 [MB] (11 MBps) [2024-11-27T11:18:52.765Z] Copying: 327/1024 [MB] (11 MBps) [2024-11-27T11:18:53.339Z] Copying: 342/1024 [MB] (15 MBps) [2024-11-27T11:18:54.727Z] Copying: 354/1024 [MB] (12 MBps) [2024-11-27T11:18:55.672Z] Copying: 366/1024 [MB] (11 MBps) [2024-11-27T11:18:56.618Z] Copying: 376/1024 [MB] (10 MBps) [2024-11-27T11:18:57.564Z] Copying: 387/1024 [MB] (10 MBps) [2024-11-27T11:18:58.510Z] Copying: 399/1024 [MB] (11 MBps) [2024-11-27T11:18:59.453Z] Copying: 410/1024 [MB] (10 MBps) [2024-11-27T11:19:00.398Z] Copying: 421/1024 [MB] (10 MBps) [2024-11-27T11:19:01.342Z] Copying: 431/1024 [MB] (10 MBps) [2024-11-27T11:19:02.729Z] Copying: 442/1024 [MB] (10 MBps) [2024-11-27T11:19:03.674Z] Copying: 463/1024 [MB] (21 MBps) [2024-11-27T11:19:04.616Z] Copying: 475/1024 [MB] (12 MBps) [2024-11-27T11:19:05.558Z] Copying: 500/1024 [MB] (24 MBps) [2024-11-27T11:19:06.500Z] Copying: 513/1024 [MB] (13 MBps) [2024-11-27T11:19:07.444Z] Copying: 532/1024 [MB] (19 MBps) [2024-11-27T11:19:08.386Z] Copying: 554/1024 [MB] (21 MBps) [2024-11-27T11:19:09.381Z] Copying: 572/1024 [MB] (17 MBps) [2024-11-27T11:19:10.326Z] Copying: 591/1024 [MB] (19 MBps) [2024-11-27T11:19:11.714Z] Copying: 606/1024 [MB] (15 MBps) [2024-11-27T11:19:12.658Z] Copying: 624/1024 [MB] (17 MBps) [2024-11-27T11:19:13.603Z] Copying: 643/1024 [MB] (19 MBps) [2024-11-27T11:19:14.548Z] Copying: 657/1024 [MB] (13 MBps) [2024-11-27T11:19:15.494Z] Copying: 675/1024 [MB] (18 MBps) [2024-11-27T11:19:16.440Z] Copying: 692/1024 [MB] (16 MBps) [2024-11-27T11:19:17.384Z] Copying: 705/1024 [MB] (13 MBps) [2024-11-27T11:19:18.328Z] Copying: 716/1024 [MB] (10 MBps) [2024-11-27T11:19:19.710Z] Copying: 727/1024 [MB] (10 MBps) [2024-11-27T11:19:20.655Z] Copying: 738/1024 [MB] (10 MBps) [2024-11-27T11:19:21.600Z] Copying: 748/1024 [MB] (10 MBps) [2024-11-27T11:19:22.545Z] Copying: 759/1024 [MB] (10 MBps) [2024-11-27T11:19:23.489Z] Copying: 769/1024 [MB] (10 MBps) [2024-11-27T11:19:24.436Z] Copying: 780/1024 [MB] (10 MBps) [2024-11-27T11:19:25.380Z] Copying: 790/1024 [MB] (10 MBps) 
[2024-11-27T11:19:26.325Z] Copying: 801/1024 [MB] (10 MBps) [2024-11-27T11:19:27.710Z] Copying: 811/1024 [MB] (10 MBps) [2024-11-27T11:19:28.650Z] Copying: 822/1024 [MB] (10 MBps) [2024-11-27T11:19:29.595Z] Copying: 846/1024 [MB] (24 MBps) [2024-11-27T11:19:30.539Z] Copying: 858/1024 [MB] (11 MBps) [2024-11-27T11:19:31.484Z] Copying: 874/1024 [MB] (15 MBps) [2024-11-27T11:19:32.428Z] Copying: 886/1024 [MB] (11 MBps) [2024-11-27T11:19:33.374Z] Copying: 897/1024 [MB] (11 MBps) [2024-11-27T11:19:34.763Z] Copying: 919/1024 [MB] (22 MBps) [2024-11-27T11:19:35.336Z] Copying: 930/1024 [MB] (11 MBps) [2024-11-27T11:19:36.722Z] Copying: 941/1024 [MB] (11 MBps) [2024-11-27T11:19:37.663Z] Copying: 955/1024 [MB] (13 MBps) [2024-11-27T11:19:38.605Z] Copying: 969/1024 [MB] (13 MBps) [2024-11-27T11:19:39.549Z] Copying: 980/1024 [MB] (11 MBps) [2024-11-27T11:19:40.549Z] Copying: 1000/1024 [MB] (19 MBps) [2024-11-27T11:19:40.549Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-27 11:19:40.401932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.666 [2024-11-27 11:19:40.402031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:11.666 [2024-11-27 11:19:40.402053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:11.666 [2024-11-27 11:19:40.402066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.666 [2024-11-27 11:19:40.402098] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:11.666 [2024-11-27 11:19:40.402982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.667 [2024-11-27 11:19:40.403034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:11.667 [2024-11-27 11:19:40.403049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:24:11.667 [2024-11-27 11:19:40.403060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.667 [2024-11-27 11:19:40.403430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.667 [2024-11-27 11:19:40.403449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:11.667 [2024-11-27 11:19:40.403464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:24:11.667 [2024-11-27 11:19:40.403478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.667 [2024-11-27 11:19:40.411451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.667 [2024-11-27 11:19:40.411501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:11.667 [2024-11-27 11:19:40.411516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.939 ms 00:24:11.667 [2024-11-27 11:19:40.411528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.667 [2024-11-27 11:19:40.418818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.667 [2024-11-27 11:19:40.418870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:11.667 [2024-11-27 11:19:40.418881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.239 ms 00:24:11.667 [2024-11-27 11:19:40.418899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.667 [2024-11-27 11:19:40.421054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.667 [2024-11-27 11:19:40.421101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist NV cache metadata 00:24:11.667 [2024-11-27 11:19:40.421111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.068 ms 00:24:11.667 [2024-11-27 11:19:40.421119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.667 [2024-11-27 11:19:40.425224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.667 [2024-11-27 11:19:40.425270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:11.667 [2024-11-27 11:19:40.425282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.061 ms 00:24:11.667 [2024-11-27 11:19:40.425291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.929 [2024-11-27 11:19:40.686242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.929 [2024-11-27 11:19:40.686295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:11.929 [2024-11-27 11:19:40.686307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 260.901 ms 00:24:11.929 [2024-11-27 11:19:40.686316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.929 [2024-11-27 11:19:40.688769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.929 [2024-11-27 11:19:40.688812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:11.929 [2024-11-27 11:19:40.688846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.436 ms 00:24:11.929 [2024-11-27 11:19:40.688854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.929 [2024-11-27 11:19:40.690722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.929 [2024-11-27 11:19:40.690783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:11.929 [2024-11-27 11:19:40.690793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.812 ms 00:24:11.929 [2024-11-27 11:19:40.690800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.929 [2024-11-27 11:19:40.692366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.929 [2024-11-27 11:19:40.692408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:11.929 [2024-11-27 11:19:40.692418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:24:11.929 [2024-11-27 11:19:40.692427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.929 [2024-11-27 11:19:40.694658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.929 [2024-11-27 11:19:40.694698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:11.929 [2024-11-27 11:19:40.694707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.161 ms 00:24:11.929 [2024-11-27 11:19:40.694716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.929 [2024-11-27 11:19:40.694754] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:11.929 [2024-11-27 11:19:40.694771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:24:11.929 [2024-11-27 11:19:40.694782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 
11:19:40.694801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.694993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.695001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.695009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.695017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:24:11.929 [2024-11-27 11:19:40.695025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:11.929 [2024-11-27 11:19:40.695033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:11.930 [2024-11-27 11:19:40.695631] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:11.930 [2024-11-27 11:19:40.695640] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cf8b7018-7e12-468c-960c-3a6629bb4ab1 00:24:11.930 [2024-11-27 11:19:40.695648] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 131840 00:24:11.930 [2024-11-27 11:19:40.695655] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 24768 00:24:11.930 [2024-11-27 11:19:40.695663] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 23808 00:24:11.930 [2024-11-27 11:19:40.695682] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0403 00:24:11.930 [2024-11-27 11:19:40.695696] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:11.930 [2024-11-27 11:19:40.695705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:11.930 [2024-11-27 11:19:40.695713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:11.930 [2024-11-27 11:19:40.695720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:11.930 [2024-11-27 11:19:40.695726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:11.930 [2024-11-27 11:19:40.695734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.930 [2024-11-27 11:19:40.695745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:11.930 [2024-11-27 11:19:40.695754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:24:11.930 [2024-11-27 11:19:40.695775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.930 [2024-11-27 11:19:40.698183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.930 [2024-11-27 11:19:40.698231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:11.930 [2024-11-27 11:19:40.698242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.388 ms 00:24:11.930 [2024-11-27 11:19:40.698251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.930 [2024-11-27 11:19:40.698385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.930 [2024-11-27 11:19:40.698400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:11.930 [2024-11-27 11:19:40.698413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:24:11.931 [2024-11-27 11:19:40.698431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.931 [2024-11-27 11:19:40.705155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.931 [2024-11-27 11:19:40.705197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:11.931 [2024-11-27 11:19:40.705214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.931 [2024-11-27 11:19:40.705222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.931 [2024-11-27 11:19:40.705287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.931 [2024-11-27 11:19:40.705297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:11.931 [2024-11-27 11:19:40.705306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.931 [2024-11-27 11:19:40.705314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.931 [2024-11-27 11:19:40.705397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.931 [2024-11-27 11:19:40.705412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:11.931 [2024-11-27 11:19:40.705421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.931 [2024-11-27 
11:19:40.705437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.931 [2024-11-27 11:19:40.705459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.931 [2024-11-27 11:19:40.705472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:11.931 [2024-11-27 11:19:40.705485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.931 [2024-11-27 11:19:40.705497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.931 [2024-11-27 11:19:40.718572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.931 [2024-11-27 11:19:40.718618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:11.931 [2024-11-27 11:19:40.718628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.931 [2024-11-27 11:19:40.718637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.931 [2024-11-27 11:19:40.728661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.931 [2024-11-27 11:19:40.728709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:11.931 [2024-11-27 11:19:40.728720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.931 [2024-11-27 11:19:40.728739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.931 [2024-11-27 11:19:40.728789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.931 [2024-11-27 11:19:40.728802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:11.931 [2024-11-27 11:19:40.728828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.931 [2024-11-27 11:19:40.728837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.931 [2024-11-27 11:19:40.728873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.931 [2024-11-27 11:19:40.728882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:11.931 [2024-11-27 11:19:40.728957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.931 [2024-11-27 11:19:40.728966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.931 [2024-11-27 11:19:40.729037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.931 [2024-11-27 11:19:40.729046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:11.931 [2024-11-27 11:19:40.729058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.931 [2024-11-27 11:19:40.729073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.931 [2024-11-27 11:19:40.729113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.931 [2024-11-27 11:19:40.729126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:11.931 [2024-11-27 11:19:40.729138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.931 [2024-11-27 11:19:40.729149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.931 [2024-11-27 11:19:40.729199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.931 [2024-11-27 11:19:40.729209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:11.931 [2024-11-27 11:19:40.729218] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.931 [2024-11-27 11:19:40.729228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.931 [2024-11-27 11:19:40.729272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.931 [2024-11-27 11:19:40.729282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:11.931 [2024-11-27 11:19:40.729290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.931 [2024-11-27 11:19:40.729297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.931 [2024-11-27 11:19:40.729429] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 327.498 ms, result 0 00:24:12.192 00:24:12.192 00:24:12.192 11:19:41 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:14.741 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:14.741 11:19:43 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:14.741 11:19:43 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:14.741 11:19:43 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:14.741 11:19:43 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:14.741 11:19:43 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:14.741 Process with pid 86322 is not found 00:24:14.741 Remove shared memory files 00:24:14.741 11:19:43 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86322 00:24:14.741 11:19:43 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86322 ']' 00:24:14.741 11:19:43 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86322 00:24:14.741 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86322) - No such process 00:24:14.742 11:19:43 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 86322 is not found' 00:24:14.742 11:19:43 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:14.742 11:19:43 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:14.742 11:19:43 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:14.742 11:19:43 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:14.742 11:19:43 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:14.742 11:19:43 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:14.742 11:19:43 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:14.742 ************************************ 00:24:14.742 END TEST ftl_restore 00:24:14.742 ************************************ 00:24:14.742 00:24:14.742 real 5m12.667s 00:24:14.742 user 4m58.912s 00:24:14.742 sys 0m13.135s 00:24:14.742 11:19:43 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:14.742 11:19:43 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:14.742 11:19:43 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:14.742 11:19:43 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:14.742 11:19:43 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:14.742 11:19:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:14.742 ************************************ 00:24:14.742 START 
TEST ftl_dirty_shutdown 00:24:14.742 ************************************ 00:24:14.742 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:14.742 * Looking for test storage... 00:24:14.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:14.742 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:14.742 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:14.742 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:15.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.004 --rc genhtml_branch_coverage=1 00:24:15.004 --rc genhtml_function_coverage=1 00:24:15.004 --rc genhtml_legend=1 00:24:15.004 --rc geninfo_all_blocks=1 00:24:15.004 --rc geninfo_unexecuted_blocks=1 00:24:15.004 00:24:15.004 ' 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:15.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.004 --rc genhtml_branch_coverage=1 00:24:15.004 --rc genhtml_function_coverage=1 00:24:15.004 --rc genhtml_legend=1 00:24:15.004 --rc geninfo_all_blocks=1 00:24:15.004 --rc geninfo_unexecuted_blocks=1 00:24:15.004 00:24:15.004 ' 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:15.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.004 --rc genhtml_branch_coverage=1 00:24:15.004 --rc genhtml_function_coverage=1 00:24:15.004 --rc genhtml_legend=1 00:24:15.004 --rc geninfo_all_blocks=1 00:24:15.004 --rc geninfo_unexecuted_blocks=1 00:24:15.004 00:24:15.004 ' 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:15.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.004 --rc genhtml_branch_coverage=1 00:24:15.004 --rc genhtml_function_coverage=1 00:24:15.004 --rc genhtml_legend=1 00:24:15.004 --rc geninfo_all_blocks=1 00:24:15.004 --rc geninfo_unexecuted_blocks=1 00:24:15.004 00:24:15.004 ' 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:15.004 11:19:43 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89643 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89643 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89643 ']' 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.004 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.005 11:19:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:15.005 11:19:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:15.005 [2024-11-27 11:19:43.770084] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
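From here dirty_shutdown.sh builds the FTL device by driving the freshly started spdk_tgt with plain rpc.py calls. A condensed sketch of the call sequence the trace below walks through, with the run-specific UUIDs captured into shell variables purely for illustration (the real script also clears any pre-existing lvstores and checks bdev sizes, which is omitted here):

  # spdk_tgt -m 0x1 is already running (pid 89643 above)
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # base (data) device 0000:00:11.0 and NV-cache device 0000:00:10.0, as passed to dirty_shutdown.sh
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  LVS=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)              # lvstore UUID printed on stdout
  LVOL=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u $LVS)     # 103424 MiB thin-provisioned data volume
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1                       # 5171 MiB slice used as the write-buffer cache
  $RPC -t 240 bdev_ftl_create -b ftl0 -d $LVOL --l2p_dram_limit 10 -c nvc0n1p0   # cap resident L2P at 10 MiB, as in this run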
00:24:15.005 [2024-11-27 11:19:43.770248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89643 ] 00:24:15.266 [2024-11-27 11:19:43.925969] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.266 [2024-11-27 11:19:43.976344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.838 11:19:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:15.838 11:19:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:24:15.838 11:19:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:15.838 11:19:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:15.838 11:19:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:15.838 11:19:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:15.838 11:19:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:15.838 11:19:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:16.099 11:19:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:16.099 11:19:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:16.099 11:19:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:16.099 11:19:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:24:16.099 11:19:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:16.099 11:19:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:16.099 11:19:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:16.099 11:19:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:16.361 11:19:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:16.361 { 00:24:16.361 "name": "nvme0n1", 00:24:16.361 "aliases": [ 00:24:16.361 "b62bb202-ebf5-499f-bd21-fd5aff3263ae" 00:24:16.361 ], 00:24:16.361 "product_name": "NVMe disk", 00:24:16.361 "block_size": 4096, 00:24:16.361 "num_blocks": 1310720, 00:24:16.361 "uuid": "b62bb202-ebf5-499f-bd21-fd5aff3263ae", 00:24:16.361 "numa_id": -1, 00:24:16.361 "assigned_rate_limits": { 00:24:16.361 "rw_ios_per_sec": 0, 00:24:16.361 "rw_mbytes_per_sec": 0, 00:24:16.361 "r_mbytes_per_sec": 0, 00:24:16.361 "w_mbytes_per_sec": 0 00:24:16.361 }, 00:24:16.361 "claimed": true, 00:24:16.361 "claim_type": "read_many_write_one", 00:24:16.361 "zoned": false, 00:24:16.361 "supported_io_types": { 00:24:16.361 "read": true, 00:24:16.361 "write": true, 00:24:16.361 "unmap": true, 00:24:16.361 "flush": true, 00:24:16.361 "reset": true, 00:24:16.361 "nvme_admin": true, 00:24:16.361 "nvme_io": true, 00:24:16.361 "nvme_io_md": false, 00:24:16.361 "write_zeroes": true, 00:24:16.361 "zcopy": false, 00:24:16.361 "get_zone_info": false, 00:24:16.361 "zone_management": false, 00:24:16.361 "zone_append": false, 00:24:16.361 "compare": true, 00:24:16.361 "compare_and_write": false, 00:24:16.361 "abort": true, 00:24:16.361 "seek_hole": false, 00:24:16.361 "seek_data": false, 00:24:16.361 
"copy": true, 00:24:16.361 "nvme_iov_md": false 00:24:16.361 }, 00:24:16.361 "driver_specific": { 00:24:16.361 "nvme": [ 00:24:16.361 { 00:24:16.361 "pci_address": "0000:00:11.0", 00:24:16.361 "trid": { 00:24:16.361 "trtype": "PCIe", 00:24:16.361 "traddr": "0000:00:11.0" 00:24:16.361 }, 00:24:16.361 "ctrlr_data": { 00:24:16.361 "cntlid": 0, 00:24:16.361 "vendor_id": "0x1b36", 00:24:16.361 "model_number": "QEMU NVMe Ctrl", 00:24:16.361 "serial_number": "12341", 00:24:16.361 "firmware_revision": "8.0.0", 00:24:16.361 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:16.361 "oacs": { 00:24:16.361 "security": 0, 00:24:16.361 "format": 1, 00:24:16.361 "firmware": 0, 00:24:16.361 "ns_manage": 1 00:24:16.361 }, 00:24:16.361 "multi_ctrlr": false, 00:24:16.361 "ana_reporting": false 00:24:16.361 }, 00:24:16.361 "vs": { 00:24:16.361 "nvme_version": "1.4" 00:24:16.361 }, 00:24:16.361 "ns_data": { 00:24:16.361 "id": 1, 00:24:16.361 "can_share": false 00:24:16.361 } 00:24:16.361 } 00:24:16.361 ], 00:24:16.361 "mp_policy": "active_passive" 00:24:16.361 } 00:24:16.361 } 00:24:16.361 ]' 00:24:16.361 11:19:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:16.361 11:19:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:16.361 11:19:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:16.361 11:19:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:24:16.361 11:19:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:24:16.361 11:19:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:24:16.361 11:19:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:16.361 11:19:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:16.361 11:19:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:16.361 11:19:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:16.361 11:19:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:16.622 11:19:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=30ea02a5-d779-4db0-ad2d-33361c80fa93 00:24:16.622 11:19:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:16.622 11:19:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 30ea02a5-d779-4db0-ad2d-33361c80fa93 00:24:16.883 11:19:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:17.145 11:19:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a636d187-dc8f-4890-b92e-9cf690e6d302 00:24:17.145 11:19:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a636d187-dc8f-4890-b92e-9cf690e6d302 00:24:17.407 11:19:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b604c692-19c1-4696-93a1-2115f67f2376 00:24:17.407 11:19:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:17.407 11:19:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b604c692-19c1-4696-93a1-2115f67f2376 00:24:17.407 11:19:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:17.407 11:19:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:17.407 11:19:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b604c692-19c1-4696-93a1-2115f67f2376 00:24:17.407 11:19:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:17.407 11:19:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b604c692-19c1-4696-93a1-2115f67f2376 00:24:17.407 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b604c692-19c1-4696-93a1-2115f67f2376 00:24:17.407 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:17.407 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:17.407 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:17.407 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b604c692-19c1-4696-93a1-2115f67f2376 00:24:17.666 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:17.666 { 00:24:17.666 "name": "b604c692-19c1-4696-93a1-2115f67f2376", 00:24:17.666 "aliases": [ 00:24:17.666 "lvs/nvme0n1p0" 00:24:17.666 ], 00:24:17.666 "product_name": "Logical Volume", 00:24:17.666 "block_size": 4096, 00:24:17.666 "num_blocks": 26476544, 00:24:17.666 "uuid": "b604c692-19c1-4696-93a1-2115f67f2376", 00:24:17.666 "assigned_rate_limits": { 00:24:17.666 "rw_ios_per_sec": 0, 00:24:17.666 "rw_mbytes_per_sec": 0, 00:24:17.666 "r_mbytes_per_sec": 0, 00:24:17.666 "w_mbytes_per_sec": 0 00:24:17.666 }, 00:24:17.666 "claimed": false, 00:24:17.666 "zoned": false, 00:24:17.666 "supported_io_types": { 00:24:17.666 "read": true, 00:24:17.666 "write": true, 00:24:17.666 "unmap": true, 00:24:17.666 "flush": false, 00:24:17.666 "reset": true, 00:24:17.666 "nvme_admin": false, 00:24:17.666 "nvme_io": false, 00:24:17.666 "nvme_io_md": false, 00:24:17.666 "write_zeroes": true, 00:24:17.666 "zcopy": false, 00:24:17.666 "get_zone_info": false, 00:24:17.666 "zone_management": false, 00:24:17.666 "zone_append": false, 00:24:17.666 "compare": false, 00:24:17.666 "compare_and_write": false, 00:24:17.666 "abort": false, 00:24:17.666 "seek_hole": true, 00:24:17.666 "seek_data": true, 00:24:17.666 "copy": false, 00:24:17.666 "nvme_iov_md": false 00:24:17.666 }, 00:24:17.666 "driver_specific": { 00:24:17.666 "lvol": { 00:24:17.666 "lvol_store_uuid": "a636d187-dc8f-4890-b92e-9cf690e6d302", 00:24:17.666 "base_bdev": "nvme0n1", 00:24:17.666 "thin_provision": true, 00:24:17.666 "num_allocated_clusters": 0, 00:24:17.666 "snapshot": false, 00:24:17.666 "clone": false, 00:24:17.666 "esnap_clone": false 00:24:17.666 } 00:24:17.666 } 00:24:17.666 } 00:24:17.666 ]' 00:24:17.666 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:17.666 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:17.666 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:17.666 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:17.666 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:17.666 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:24:17.666 11:19:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:17.666 11:19:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:17.666 11:19:46 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:17.925 11:19:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:17.925 11:19:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:17.925 11:19:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b604c692-19c1-4696-93a1-2115f67f2376 00:24:17.925 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b604c692-19c1-4696-93a1-2115f67f2376 00:24:17.925 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:17.925 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:17.925 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:17.925 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b604c692-19c1-4696-93a1-2115f67f2376 00:24:18.183 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:18.183 { 00:24:18.183 "name": "b604c692-19c1-4696-93a1-2115f67f2376", 00:24:18.183 "aliases": [ 00:24:18.183 "lvs/nvme0n1p0" 00:24:18.183 ], 00:24:18.183 "product_name": "Logical Volume", 00:24:18.183 "block_size": 4096, 00:24:18.183 "num_blocks": 26476544, 00:24:18.183 "uuid": "b604c692-19c1-4696-93a1-2115f67f2376", 00:24:18.183 "assigned_rate_limits": { 00:24:18.183 "rw_ios_per_sec": 0, 00:24:18.183 "rw_mbytes_per_sec": 0, 00:24:18.183 "r_mbytes_per_sec": 0, 00:24:18.183 "w_mbytes_per_sec": 0 00:24:18.183 }, 00:24:18.183 "claimed": false, 00:24:18.183 "zoned": false, 00:24:18.183 "supported_io_types": { 00:24:18.183 "read": true, 00:24:18.183 "write": true, 00:24:18.183 "unmap": true, 00:24:18.183 "flush": false, 00:24:18.183 "reset": true, 00:24:18.183 "nvme_admin": false, 00:24:18.183 "nvme_io": false, 00:24:18.183 "nvme_io_md": false, 00:24:18.183 "write_zeroes": true, 00:24:18.183 "zcopy": false, 00:24:18.183 "get_zone_info": false, 00:24:18.183 "zone_management": false, 00:24:18.183 "zone_append": false, 00:24:18.183 "compare": false, 00:24:18.183 "compare_and_write": false, 00:24:18.183 "abort": false, 00:24:18.183 "seek_hole": true, 00:24:18.183 "seek_data": true, 00:24:18.183 "copy": false, 00:24:18.183 "nvme_iov_md": false 00:24:18.183 }, 00:24:18.183 "driver_specific": { 00:24:18.183 "lvol": { 00:24:18.183 "lvol_store_uuid": "a636d187-dc8f-4890-b92e-9cf690e6d302", 00:24:18.183 "base_bdev": "nvme0n1", 00:24:18.183 "thin_provision": true, 00:24:18.183 "num_allocated_clusters": 0, 00:24:18.183 "snapshot": false, 00:24:18.183 "clone": false, 00:24:18.183 "esnap_clone": false 00:24:18.183 } 00:24:18.183 } 00:24:18.183 } 00:24:18.183 ]' 00:24:18.183 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:18.183 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:18.183 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:18.183 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:18.183 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:18.184 11:19:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:24:18.184 11:19:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:18.184 11:19:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:18.442 11:19:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:18.442 11:19:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b604c692-19c1-4696-93a1-2115f67f2376 00:24:18.442 11:19:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b604c692-19c1-4696-93a1-2115f67f2376 00:24:18.442 11:19:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:18.442 11:19:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:18.442 11:19:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:18.442 11:19:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b604c692-19c1-4696-93a1-2115f67f2376 00:24:18.442 11:19:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:18.442 { 00:24:18.442 "name": "b604c692-19c1-4696-93a1-2115f67f2376", 00:24:18.442 "aliases": [ 00:24:18.442 "lvs/nvme0n1p0" 00:24:18.442 ], 00:24:18.442 "product_name": "Logical Volume", 00:24:18.442 "block_size": 4096, 00:24:18.442 "num_blocks": 26476544, 00:24:18.442 "uuid": "b604c692-19c1-4696-93a1-2115f67f2376", 00:24:18.442 "assigned_rate_limits": { 00:24:18.442 "rw_ios_per_sec": 0, 00:24:18.442 "rw_mbytes_per_sec": 0, 00:24:18.442 "r_mbytes_per_sec": 0, 00:24:18.442 "w_mbytes_per_sec": 0 00:24:18.442 }, 00:24:18.442 "claimed": false, 00:24:18.442 "zoned": false, 00:24:18.442 "supported_io_types": { 00:24:18.442 "read": true, 00:24:18.442 "write": true, 00:24:18.442 "unmap": true, 00:24:18.442 "flush": false, 00:24:18.442 "reset": true, 00:24:18.442 "nvme_admin": false, 00:24:18.442 "nvme_io": false, 00:24:18.442 "nvme_io_md": false, 00:24:18.442 "write_zeroes": true, 00:24:18.442 "zcopy": false, 00:24:18.442 "get_zone_info": false, 00:24:18.442 "zone_management": false, 00:24:18.442 "zone_append": false, 00:24:18.442 "compare": false, 00:24:18.442 "compare_and_write": false, 00:24:18.442 "abort": false, 00:24:18.442 "seek_hole": true, 00:24:18.442 "seek_data": true, 00:24:18.442 "copy": false, 00:24:18.442 "nvme_iov_md": false 00:24:18.442 }, 00:24:18.442 "driver_specific": { 00:24:18.442 "lvol": { 00:24:18.442 "lvol_store_uuid": "a636d187-dc8f-4890-b92e-9cf690e6d302", 00:24:18.442 "base_bdev": "nvme0n1", 00:24:18.442 "thin_provision": true, 00:24:18.442 "num_allocated_clusters": 0, 00:24:18.442 "snapshot": false, 00:24:18.442 "clone": false, 00:24:18.442 "esnap_clone": false 00:24:18.442 } 00:24:18.442 } 00:24:18.442 } 00:24:18.442 ]' 00:24:18.442 11:19:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:18.702 11:19:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:18.702 11:19:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:18.702 11:19:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:18.702 11:19:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:18.702 11:19:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:24:18.702 11:19:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:18.702 11:19:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b604c692-19c1-4696-93a1-2115f67f2376 
--l2p_dram_limit 10' 00:24:18.702 11:19:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:18.702 11:19:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:18.702 11:19:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:18.702 11:19:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b604c692-19c1-4696-93a1-2115f67f2376 --l2p_dram_limit 10 -c nvc0n1p0 00:24:18.702 [2024-11-27 11:19:47.558557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.702 [2024-11-27 11:19:47.558597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:18.702 [2024-11-27 11:19:47.558608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:18.702 [2024-11-27 11:19:47.558615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.702 [2024-11-27 11:19:47.558652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.702 [2024-11-27 11:19:47.558661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:18.702 [2024-11-27 11:19:47.558667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:18.702 [2024-11-27 11:19:47.558678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.702 [2024-11-27 11:19:47.558697] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:18.702 [2024-11-27 11:19:47.558979] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:18.702 [2024-11-27 11:19:47.558995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.702 [2024-11-27 11:19:47.559003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:18.702 [2024-11-27 11:19:47.559011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:24:18.702 [2024-11-27 11:19:47.559020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.702 [2024-11-27 11:19:47.559164] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0b653020-fbad-440a-9449-43326b5105a8 00:24:18.702 [2024-11-27 11:19:47.560095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.702 [2024-11-27 11:19:47.560118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:18.702 [2024-11-27 11:19:47.560127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:18.702 [2024-11-27 11:19:47.560133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.702 [2024-11-27 11:19:47.564680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.702 [2024-11-27 11:19:47.564710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:18.702 [2024-11-27 11:19:47.564719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.512 ms 00:24:18.702 [2024-11-27 11:19:47.564725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.702 [2024-11-27 11:19:47.564782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.702 [2024-11-27 11:19:47.564789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:18.702 [2024-11-27 11:19:47.564797] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:18.702 [2024-11-27 11:19:47.564804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.702 [2024-11-27 11:19:47.564870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.702 [2024-11-27 11:19:47.564878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:18.702 [2024-11-27 11:19:47.564886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:18.702 [2024-11-27 11:19:47.564902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.702 [2024-11-27 11:19:47.564920] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:18.702 [2024-11-27 11:19:47.566159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.702 [2024-11-27 11:19:47.566185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:18.702 [2024-11-27 11:19:47.566194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.246 ms 00:24:18.702 [2024-11-27 11:19:47.566202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.702 [2024-11-27 11:19:47.566225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.702 [2024-11-27 11:19:47.566233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:18.702 [2024-11-27 11:19:47.566239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:18.702 [2024-11-27 11:19:47.566248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.702 [2024-11-27 11:19:47.566260] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:18.702 [2024-11-27 11:19:47.566370] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:18.702 [2024-11-27 11:19:47.566378] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:18.702 [2024-11-27 11:19:47.566388] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:18.702 [2024-11-27 11:19:47.566398] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:18.702 [2024-11-27 11:19:47.566406] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:18.702 [2024-11-27 11:19:47.566412] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:18.702 [2024-11-27 11:19:47.566423] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:18.703 [2024-11-27 11:19:47.566428] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:18.703 [2024-11-27 11:19:47.566435] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:18.703 [2024-11-27 11:19:47.566442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.703 [2024-11-27 11:19:47.566449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:18.703 [2024-11-27 11:19:47.566454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:24:18.703 [2024-11-27 11:19:47.566462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.703 [2024-11-27 11:19:47.566525] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.703 [2024-11-27 11:19:47.566534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:18.703 [2024-11-27 11:19:47.566540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:18.703 [2024-11-27 11:19:47.566546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.703 [2024-11-27 11:19:47.566617] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:18.703 [2024-11-27 11:19:47.566635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:18.703 [2024-11-27 11:19:47.566641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:18.703 [2024-11-27 11:19:47.566648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.703 [2024-11-27 11:19:47.566657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:18.703 [2024-11-27 11:19:47.566664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:18.703 [2024-11-27 11:19:47.566669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:18.703 [2024-11-27 11:19:47.566675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:18.703 [2024-11-27 11:19:47.566681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:18.703 [2024-11-27 11:19:47.566687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:18.703 [2024-11-27 11:19:47.566692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:18.703 [2024-11-27 11:19:47.566700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:18.703 [2024-11-27 11:19:47.566705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:18.703 [2024-11-27 11:19:47.566713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:18.703 [2024-11-27 11:19:47.566718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:18.703 [2024-11-27 11:19:47.566725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.703 [2024-11-27 11:19:47.566732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:18.703 [2024-11-27 11:19:47.566739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:18.703 [2024-11-27 11:19:47.566743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.703 [2024-11-27 11:19:47.566749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:18.703 [2024-11-27 11:19:47.566754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:18.703 [2024-11-27 11:19:47.566761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:18.703 [2024-11-27 11:19:47.566766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:18.703 [2024-11-27 11:19:47.566772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:18.703 [2024-11-27 11:19:47.566777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:18.703 [2024-11-27 11:19:47.566783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:18.703 [2024-11-27 11:19:47.566788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:18.703 [2024-11-27 11:19:47.566794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:18.703 [2024-11-27 11:19:47.566799] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:18.703 [2024-11-27 11:19:47.566807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:18.703 [2024-11-27 11:19:47.566812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:18.703 [2024-11-27 11:19:47.566819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:18.703 [2024-11-27 11:19:47.566825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:18.703 [2024-11-27 11:19:47.566832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:18.703 [2024-11-27 11:19:47.566838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:18.703 [2024-11-27 11:19:47.566845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:18.703 [2024-11-27 11:19:47.566851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:18.703 [2024-11-27 11:19:47.566859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:18.703 [2024-11-27 11:19:47.566865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:18.703 [2024-11-27 11:19:47.566872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.703 [2024-11-27 11:19:47.566878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:18.703 [2024-11-27 11:19:47.566885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:18.703 [2024-11-27 11:19:47.566900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.703 [2024-11-27 11:19:47.566906] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:18.703 [2024-11-27 11:19:47.566913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:18.703 [2024-11-27 11:19:47.566922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:18.703 [2024-11-27 11:19:47.566928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.703 [2024-11-27 11:19:47.566935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:18.703 [2024-11-27 11:19:47.566941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:18.703 [2024-11-27 11:19:47.566949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:18.703 [2024-11-27 11:19:47.566955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:18.703 [2024-11-27 11:19:47.566962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:18.703 [2024-11-27 11:19:47.566968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:18.703 [2024-11-27 11:19:47.566977] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:18.703 [2024-11-27 11:19:47.566984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:18.703 [2024-11-27 11:19:47.566993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:18.703 [2024-11-27 11:19:47.566999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:18.703 [2024-11-27 11:19:47.567007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:18.703 [2024-11-27 11:19:47.567013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:18.703 [2024-11-27 11:19:47.567021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:18.703 [2024-11-27 11:19:47.567027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:18.703 [2024-11-27 11:19:47.567036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:18.703 [2024-11-27 11:19:47.567042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:18.703 [2024-11-27 11:19:47.567050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:18.703 [2024-11-27 11:19:47.567056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:18.703 [2024-11-27 11:19:47.567064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:18.703 [2024-11-27 11:19:47.567070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:18.703 [2024-11-27 11:19:47.567077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:18.703 [2024-11-27 11:19:47.567084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:18.703 [2024-11-27 11:19:47.567091] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:18.703 [2024-11-27 11:19:47.567100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:18.703 [2024-11-27 11:19:47.567108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:18.703 [2024-11-27 11:19:47.567114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:18.703 [2024-11-27 11:19:47.567122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:18.703 [2024-11-27 11:19:47.567128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:18.703 [2024-11-27 11:19:47.567136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.703 [2024-11-27 11:19:47.567142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:18.703 [2024-11-27 11:19:47.567151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:24:18.703 [2024-11-27 11:19:47.567157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.703 [2024-11-27 11:19:47.567186] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:18.703 [2024-11-27 11:19:47.567194] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:22.906 [2024-11-27 11:19:51.082680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.082781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:22.906 [2024-11-27 11:19:51.082804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3515.473 ms 00:24:22.906 [2024-11-27 11:19:51.082813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.096694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.096751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:22.906 [2024-11-27 11:19:51.096767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.736 ms 00:24:22.906 [2024-11-27 11:19:51.096778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.096908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.096920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:22.906 [2024-11-27 11:19:51.096942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:22.906 [2024-11-27 11:19:51.096951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.108650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.108873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:22.906 [2024-11-27 11:19:51.108921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.643 ms 00:24:22.906 [2024-11-27 11:19:51.108930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.108968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.108981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.906 [2024-11-27 11:19:51.108992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:22.906 [2024-11-27 11:19:51.108999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.109544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.109566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:22.906 [2024-11-27 11:19:51.109580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:24:22.906 [2024-11-27 11:19:51.109589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.109712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.109723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.906 [2024-11-27 11:19:51.109738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:22.906 [2024-11-27 11:19:51.109746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.133633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.133693] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.906 [2024-11-27 11:19:51.133709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.860 ms 00:24:22.906 [2024-11-27 11:19:51.133718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.143679] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:22.906 [2024-11-27 11:19:51.147569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.147687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:22.906 [2024-11-27 11:19:51.147699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.749 ms 00:24:22.906 [2024-11-27 11:19:51.147710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.225762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.225836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:22.906 [2024-11-27 11:19:51.225850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.018 ms 00:24:22.906 [2024-11-27 11:19:51.225865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.226112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.226128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:22.906 [2024-11-27 11:19:51.226138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:24:22.906 [2024-11-27 11:19:51.226149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.231602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.231657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:22.906 [2024-11-27 11:19:51.231669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.432 ms 00:24:22.906 [2024-11-27 11:19:51.231680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.236662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.236713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:22.906 [2024-11-27 11:19:51.236724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.932 ms 00:24:22.906 [2024-11-27 11:19:51.236734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.237147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.237161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:22.906 [2024-11-27 11:19:51.237172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:24:22.906 [2024-11-27 11:19:51.237190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.277808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.277872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:22.906 [2024-11-27 11:19:51.277885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.579 ms 00:24:22.906 [2024-11-27 11:19:51.277925] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.284519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.284736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:22.906 [2024-11-27 11:19:51.284756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.531 ms 00:24:22.906 [2024-11-27 11:19:51.284767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.290444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.290497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:22.906 [2024-11-27 11:19:51.290507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.606 ms 00:24:22.906 [2024-11-27 11:19:51.290516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.296691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.296745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:22.906 [2024-11-27 11:19:51.296756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.130 ms 00:24:22.906 [2024-11-27 11:19:51.296769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.296833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.296847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:22.906 [2024-11-27 11:19:51.296856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:22.906 [2024-11-27 11:19:51.296867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.296961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.906 [2024-11-27 11:19:51.296974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:22.906 [2024-11-27 11:19:51.296983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:22.906 [2024-11-27 11:19:51.297004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.906 [2024-11-27 11:19:51.298122] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3739.052 ms, result 0 00:24:22.906 { 00:24:22.906 "name": "ftl0", 00:24:22.906 "uuid": "0b653020-fbad-440a-9449-43326b5105a8" 00:24:22.906 } 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:22.906 /dev/nbd0 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:22.906 11:19:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:22.906 1+0 records in 00:24:22.906 1+0 records out 00:24:22.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480529 s, 8.5 MB/s 00:24:22.907 11:19:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:23.167 11:19:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:24:23.167 11:19:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:23.167 11:19:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:23.167 11:19:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:24:23.167 11:19:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:23.167 [2024-11-27 11:19:51.865014] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:23.167 [2024-11-27 11:19:51.865174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89785 ] 00:24:23.167 [2024-11-27 11:19:52.018625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.426 [2024-11-27 11:19:52.068973] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.371  [2024-11-27T11:19:54.193Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-27T11:19:55.570Z] Copying: 418/1024 [MB] (229 MBps) [2024-11-27T11:19:56.504Z] Copying: 681/1024 [MB] (262 MBps) [2024-11-27T11:19:56.504Z] Copying: 937/1024 [MB] (255 MBps) [2024-11-27T11:19:56.798Z] Copying: 1024/1024 [MB] (average 235 MBps) 00:24:27.915 00:24:27.915 11:19:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:30.449 11:19:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:30.449 [2024-11-27 11:19:58.839685] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
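Condensed for readability, the write phase traced above comes down to the following commands (a sketch assembled from the "-- #" trace lines; $SPDK_REPO is shorthand for /home/vagrant/spdk_repo/spdk and is not a variable the test itself defines):

    # stage 1 GiB (262144 x 4 KiB blocks) of random data and record its checksum
    $SPDK_REPO/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=$SPDK_REPO/test/ftl/testfile --bs=4096 --count=262144
    md5sum $SPDK_REPO/test/ftl/testfile

    # replay the same data into the FTL bdev exposed as /dev/nbd0, using O_DIRECT on the output
    $SPDK_REPO/build/bin/spdk_dd -m 0x2 --if=$SPDK_REPO/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct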
00:24:30.449 [2024-11-27 11:19:58.839803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89862 ] 00:24:30.449 [2024-11-27 11:19:58.986058] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.449 [2024-11-27 11:19:59.017218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.392  [2024-11-27T11:20:01.214Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-27T11:20:02.147Z] Copying: 34/1024 [MB] (18 MBps) [2024-11-27T11:20:03.082Z] Copying: 52/1024 [MB] (18 MBps) [2024-11-27T11:20:04.459Z] Copying: 71/1024 [MB] (19 MBps) [2024-11-27T11:20:05.404Z] Copying: 100/1024 [MB] (28 MBps) [2024-11-27T11:20:06.345Z] Copying: 114/1024 [MB] (14 MBps) [2024-11-27T11:20:07.287Z] Copying: 134/1024 [MB] (19 MBps) [2024-11-27T11:20:08.228Z] Copying: 153/1024 [MB] (19 MBps) [2024-11-27T11:20:09.169Z] Copying: 171/1024 [MB] (17 MBps) [2024-11-27T11:20:10.107Z] Copying: 190/1024 [MB] (19 MBps) [2024-11-27T11:20:11.489Z] Copying: 208/1024 [MB] (18 MBps) [2024-11-27T11:20:12.502Z] Copying: 225/1024 [MB] (17 MBps) [2024-11-27T11:20:13.089Z] Copying: 245/1024 [MB] (19 MBps) [2024-11-27T11:20:14.476Z] Copying: 262/1024 [MB] (16 MBps) [2024-11-27T11:20:15.420Z] Copying: 280/1024 [MB] (18 MBps) [2024-11-27T11:20:16.360Z] Copying: 295/1024 [MB] (14 MBps) [2024-11-27T11:20:17.298Z] Copying: 312/1024 [MB] (17 MBps) [2024-11-27T11:20:18.241Z] Copying: 329/1024 [MB] (17 MBps) [2024-11-27T11:20:19.185Z] Copying: 346/1024 [MB] (16 MBps) [2024-11-27T11:20:20.132Z] Copying: 359/1024 [MB] (13 MBps) [2024-11-27T11:20:21.075Z] Copying: 372/1024 [MB] (12 MBps) [2024-11-27T11:20:22.462Z] Copying: 386/1024 [MB] (13 MBps) [2024-11-27T11:20:23.406Z] Copying: 399/1024 [MB] (13 MBps) [2024-11-27T11:20:24.350Z] Copying: 412/1024 [MB] (12 MBps) [2024-11-27T11:20:25.292Z] Copying: 424/1024 [MB] (11 MBps) [2024-11-27T11:20:26.234Z] Copying: 438/1024 [MB] (14 MBps) [2024-11-27T11:20:27.175Z] Copying: 449/1024 [MB] (10 MBps) [2024-11-27T11:20:28.116Z] Copying: 462/1024 [MB] (13 MBps) [2024-11-27T11:20:29.501Z] Copying: 481/1024 [MB] (18 MBps) [2024-11-27T11:20:30.073Z] Copying: 498/1024 [MB] (17 MBps) [2024-11-27T11:20:31.462Z] Copying: 509/1024 [MB] (11 MBps) [2024-11-27T11:20:32.407Z] Copying: 521/1024 [MB] (11 MBps) [2024-11-27T11:20:33.350Z] Copying: 535/1024 [MB] (14 MBps) [2024-11-27T11:20:34.293Z] Copying: 550/1024 [MB] (14 MBps) [2024-11-27T11:20:35.235Z] Copying: 565/1024 [MB] (15 MBps) [2024-11-27T11:20:36.178Z] Copying: 582/1024 [MB] (17 MBps) [2024-11-27T11:20:37.123Z] Copying: 602/1024 [MB] (19 MBps) [2024-11-27T11:20:38.067Z] Copying: 619/1024 [MB] (17 MBps) [2024-11-27T11:20:39.452Z] Copying: 631/1024 [MB] (12 MBps) [2024-11-27T11:20:40.394Z] Copying: 648/1024 [MB] (16 MBps) [2024-11-27T11:20:41.334Z] Copying: 670/1024 [MB] (22 MBps) [2024-11-27T11:20:42.278Z] Copying: 693/1024 [MB] (22 MBps) [2024-11-27T11:20:43.222Z] Copying: 711/1024 [MB] (18 MBps) [2024-11-27T11:20:44.163Z] Copying: 732/1024 [MB] (21 MBps) [2024-11-27T11:20:45.199Z] Copying: 750/1024 [MB] (17 MBps) [2024-11-27T11:20:46.138Z] Copying: 769/1024 [MB] (18 MBps) [2024-11-27T11:20:47.081Z] Copying: 790/1024 [MB] (21 MBps) [2024-11-27T11:20:48.470Z] Copying: 809/1024 [MB] (19 MBps) [2024-11-27T11:20:49.411Z] Copying: 826/1024 [MB] (17 MBps) [2024-11-27T11:20:50.352Z] Copying: 844/1024 [MB] (18 MBps) 
[2024-11-27T11:20:51.292Z] Copying: 862/1024 [MB] (17 MBps) [2024-11-27T11:20:52.231Z] Copying: 879/1024 [MB] (17 MBps) [2024-11-27T11:20:53.224Z] Copying: 895/1024 [MB] (16 MBps) [2024-11-27T11:20:54.167Z] Copying: 913/1024 [MB] (18 MBps) [2024-11-27T11:20:55.110Z] Copying: 928/1024 [MB] (14 MBps) [2024-11-27T11:20:56.500Z] Copying: 950/1024 [MB] (22 MBps) [2024-11-27T11:20:57.071Z] Copying: 970/1024 [MB] (19 MBps) [2024-11-27T11:20:58.456Z] Copying: 987/1024 [MB] (17 MBps) [2024-11-27T11:20:59.416Z] Copying: 1004/1024 [MB] (16 MBps) [2024-11-27T11:20:59.416Z] Copying: 1022/1024 [MB] (18 MBps) [2024-11-27T11:20:59.416Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:25:30.533 00:25:30.533 11:20:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:30.533 11:20:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:30.792 11:20:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:31.055 [2024-11-27 11:20:59.741188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.055 [2024-11-27 11:20:59.741230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:31.055 [2024-11-27 11:20:59.741245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:31.055 [2024-11-27 11:20:59.741257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.055 [2024-11-27 11:20:59.741281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:31.055 [2024-11-27 11:20:59.741701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.055 [2024-11-27 11:20:59.741729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:31.055 [2024-11-27 11:20:59.741738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:25:31.055 [2024-11-27 11:20:59.741749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.055 [2024-11-27 11:20:59.744307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.055 [2024-11-27 11:20:59.744341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:31.055 [2024-11-27 11:20:59.744351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.539 ms 00:25:31.055 [2024-11-27 11:20:59.744359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.055 [2024-11-27 11:20:59.760930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.055 [2024-11-27 11:20:59.760967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:31.055 [2024-11-27 11:20:59.760977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.554 ms 00:25:31.055 [2024-11-27 11:20:59.760987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.055 [2024-11-27 11:20:59.767133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.055 [2024-11-27 11:20:59.767171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:31.055 [2024-11-27 11:20:59.767180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.114 ms 00:25:31.055 [2024-11-27 11:20:59.767189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.055 [2024-11-27 11:20:59.769527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:31.055 [2024-11-27 11:20:59.769561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:31.055 [2024-11-27 11:20:59.769571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.280 ms 00:25:31.055 [2024-11-27 11:20:59.769581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.055 [2024-11-27 11:20:59.773933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.055 [2024-11-27 11:20:59.773968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:31.055 [2024-11-27 11:20:59.773979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.322 ms 00:25:31.055 [2024-11-27 11:20:59.773990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.055 [2024-11-27 11:20:59.774106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.055 [2024-11-27 11:20:59.774117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:31.055 [2024-11-27 11:20:59.774125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:25:31.055 [2024-11-27 11:20:59.774137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.055 [2024-11-27 11:20:59.776143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.055 [2024-11-27 11:20:59.776190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:31.055 [2024-11-27 11:20:59.776201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.988 ms 00:25:31.055 [2024-11-27 11:20:59.776210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.055 [2024-11-27 11:20:59.777834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.055 [2024-11-27 11:20:59.777874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:31.055 [2024-11-27 11:20:59.777882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.589 ms 00:25:31.055 [2024-11-27 11:20:59.777901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.055 [2024-11-27 11:20:59.779064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.055 [2024-11-27 11:20:59.779095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:31.055 [2024-11-27 11:20:59.779103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.131 ms 00:25:31.055 [2024-11-27 11:20:59.779110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.055 [2024-11-27 11:20:59.780178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.055 [2024-11-27 11:20:59.780209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:31.055 [2024-11-27 11:20:59.780217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:25:31.055 [2024-11-27 11:20:59.780225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.055 [2024-11-27 11:20:59.780252] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:31.055 [2024-11-27 11:20:59.780267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780285] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:31.055 [2024-11-27 11:20:59.780401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780488] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 
[2024-11-27 11:20:59.780692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:25:31.056 [2024-11-27 11:20:59.780924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.780998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:31.056 [2024-11-27 11:20:59.781121] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:31.056 [2024-11-27 11:20:59.781129] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0b653020-fbad-440a-9449-43326b5105a8 
00:25:31.056 [2024-11-27 11:20:59.781140] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:31.056 [2024-11-27 11:20:59.781147] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:31.056 [2024-11-27 11:20:59.781156] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:31.056 [2024-11-27 11:20:59.781163] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:31.056 [2024-11-27 11:20:59.781171] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:31.057 [2024-11-27 11:20:59.781179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:31.057 [2024-11-27 11:20:59.781187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:31.057 [2024-11-27 11:20:59.781194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:31.057 [2024-11-27 11:20:59.781201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:31.057 [2024-11-27 11:20:59.781208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.057 [2024-11-27 11:20:59.781217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:31.057 [2024-11-27 11:20:59.781225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:25:31.057 [2024-11-27 11:20:59.781233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.782622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.057 [2024-11-27 11:20:59.782652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:31.057 [2024-11-27 11:20:59.782661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.372 ms 00:25:31.057 [2024-11-27 11:20:59.782670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.782743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.057 [2024-11-27 11:20:59.782753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:31.057 [2024-11-27 11:20:59.782761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:31.057 [2024-11-27 11:20:59.782770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.787771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.057 [2024-11-27 11:20:59.787805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:31.057 [2024-11-27 11:20:59.787814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.057 [2024-11-27 11:20:59.787823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.787871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.057 [2024-11-27 11:20:59.787881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:31.057 [2024-11-27 11:20:59.787900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.057 [2024-11-27 11:20:59.787910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.787977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.057 [2024-11-27 11:20:59.787990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.057 [2024-11-27 11:20:59.787998] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.057 [2024-11-27 11:20:59.788007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.788023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.057 [2024-11-27 11:20:59.788031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:31.057 [2024-11-27 11:20:59.788038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.057 [2024-11-27 11:20:59.788047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.796423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.057 [2024-11-27 11:20:59.796463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:31.057 [2024-11-27 11:20:59.796473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.057 [2024-11-27 11:20:59.796482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.803796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.057 [2024-11-27 11:20:59.803835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:31.057 [2024-11-27 11:20:59.803845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.057 [2024-11-27 11:20:59.803859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.803914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.057 [2024-11-27 11:20:59.803929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:31.057 [2024-11-27 11:20:59.803937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.057 [2024-11-27 11:20:59.803946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.803994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.057 [2024-11-27 11:20:59.804005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:31.057 [2024-11-27 11:20:59.804013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.057 [2024-11-27 11:20:59.804021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.804083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.057 [2024-11-27 11:20:59.804096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:31.057 [2024-11-27 11:20:59.804104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.057 [2024-11-27 11:20:59.804113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.804141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.057 [2024-11-27 11:20:59.804152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:31.057 [2024-11-27 11:20:59.804159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.057 [2024-11-27 11:20:59.804168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.804203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.057 [2024-11-27 11:20:59.804217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:25:31.057 [2024-11-27 11:20:59.804224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.057 [2024-11-27 11:20:59.804233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.804276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.057 [2024-11-27 11:20:59.804301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:31.057 [2024-11-27 11:20:59.804309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.057 [2024-11-27 11:20:59.804318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.057 [2024-11-27 11:20:59.804441] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.222 ms, result 0 00:25:31.057 true 00:25:31.057 11:20:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89643 00:25:31.057 11:20:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89643 00:25:31.057 11:20:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:31.057 [2024-11-27 11:20:59.891069] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:31.057 [2024-11-27 11:20:59.891180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90491 ] 00:25:31.319 [2024-11-27 11:21:00.039294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.319 [2024-11-27 11:21:00.078280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.707  [2024-11-27T11:21:02.161Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-27T11:21:03.537Z] Copying: 398/1024 [MB] (209 MBps) [2024-11-27T11:21:04.470Z] Copying: 659/1024 [MB] (261 MBps) [2024-11-27T11:21:04.730Z] Copying: 915/1024 [MB] (256 MBps) [2024-11-27T11:21:04.730Z] Copying: 1024/1024 [MB] (average 231 MBps) 00:25:35.847 00:25:35.847 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89643 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:35.847 11:21:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:36.104 [2024-11-27 11:21:04.788778] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
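The dirty-shutdown step traced above reduces to roughly this (a sketch; 89643 is the PID of the spdk_tgt instance that owns ftl0, ftl.json is the bdev configuration saved earlier with save_subsystem_config, and $SPDK_REPO is the same shorthand as above):

    # kill the SPDK target hard so the FTL device never sees a clean shutdown
    kill -9 89643
    rm -f /dev/shm/spdk_tgt_trace.pid89643

    # build a second 1 GiB test file, then write it into ftl0 from a standalone spdk_dd that
    # loads the saved bdev config, which forces FTL to come up from the dirty state
    $SPDK_REPO/build/bin/spdk_dd --if=/dev/urandom --of=$SPDK_REPO/test/ftl/testfile2 --bs=4096 --count=262144
    $SPDK_REPO/build/bin/spdk_dd --if=$SPDK_REPO/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
        --json=$SPDK_REPO/test/ftl/config/ftl.json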
00:25:36.104 [2024-11-27 11:21:04.788922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90548 ] 00:25:36.105 [2024-11-27 11:21:04.938608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.105 [2024-11-27 11:21:04.977612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.362 [2024-11-27 11:21:05.061802] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.362 [2024-11-27 11:21:05.061851] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.362 [2024-11-27 11:21:05.123867] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:36.362 [2024-11-27 11:21:05.124066] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:36.362 [2024-11-27 11:21:05.124205] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:36.624 [2024-11-27 11:21:05.296966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.624 [2024-11-27 11:21:05.297005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:36.625 [2024-11-27 11:21:05.297018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:36.625 [2024-11-27 11:21:05.297026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-11-27 11:21:05.297073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.625 [2024-11-27 11:21:05.297086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:36.625 [2024-11-27 11:21:05.297094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:36.625 [2024-11-27 11:21:05.297102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-11-27 11:21:05.297118] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:36.625 [2024-11-27 11:21:05.297573] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:36.625 [2024-11-27 11:21:05.297610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.625 [2024-11-27 11:21:05.297623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:36.625 [2024-11-27 11:21:05.297632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:25:36.625 [2024-11-27 11:21:05.297642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-11-27 11:21:05.298758] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:36.625 [2024-11-27 11:21:05.301337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.625 [2024-11-27 11:21:05.301370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:36.625 [2024-11-27 11:21:05.301380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.580 ms 00:25:36.625 [2024-11-27 11:21:05.301387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-11-27 11:21:05.301440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.625 [2024-11-27 11:21:05.301449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:25:36.625 [2024-11-27 11:21:05.301457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:36.625 [2024-11-27 11:21:05.301464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-11-27 11:21:05.306445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.625 [2024-11-27 11:21:05.306473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:36.625 [2024-11-27 11:21:05.306482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.927 ms 00:25:36.625 [2024-11-27 11:21:05.306492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-11-27 11:21:05.306571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.625 [2024-11-27 11:21:05.306580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:36.625 [2024-11-27 11:21:05.306588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:36.625 [2024-11-27 11:21:05.306595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-11-27 11:21:05.306635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.625 [2024-11-27 11:21:05.306647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:36.625 [2024-11-27 11:21:05.306657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:36.625 [2024-11-27 11:21:05.306664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-11-27 11:21:05.306685] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:36.625 [2024-11-27 11:21:05.308139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.625 [2024-11-27 11:21:05.308161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:36.625 [2024-11-27 11:21:05.308169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms 00:25:36.625 [2024-11-27 11:21:05.308176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-11-27 11:21:05.308207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.625 [2024-11-27 11:21:05.308214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:36.625 [2024-11-27 11:21:05.308222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:36.625 [2024-11-27 11:21:05.308230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-11-27 11:21:05.308248] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:36.625 [2024-11-27 11:21:05.308265] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:36.625 [2024-11-27 11:21:05.308298] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:36.625 [2024-11-27 11:21:05.308316] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:36.625 [2024-11-27 11:21:05.308419] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:36.625 [2024-11-27 11:21:05.308429] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:36.625 
[2024-11-27 11:21:05.308439] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:36.625 [2024-11-27 11:21:05.308449] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:36.625 [2024-11-27 11:21:05.308458] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:36.625 [2024-11-27 11:21:05.308470] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:36.625 [2024-11-27 11:21:05.308480] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:36.625 [2024-11-27 11:21:05.308487] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:36.625 [2024-11-27 11:21:05.308494] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:36.625 [2024-11-27 11:21:05.308501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.625 [2024-11-27 11:21:05.308511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:36.625 [2024-11-27 11:21:05.308518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:25:36.625 [2024-11-27 11:21:05.308526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-11-27 11:21:05.308607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.625 [2024-11-27 11:21:05.308615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:36.625 [2024-11-27 11:21:05.308625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:36.625 [2024-11-27 11:21:05.308636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-11-27 11:21:05.308730] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:36.625 [2024-11-27 11:21:05.308739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:36.625 [2024-11-27 11:21:05.308750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.625 [2024-11-27 11:21:05.308761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.625 [2024-11-27 11:21:05.308768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:36.625 [2024-11-27 11:21:05.308775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:36.625 [2024-11-27 11:21:05.308781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:36.625 [2024-11-27 11:21:05.308789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:36.625 [2024-11-27 11:21:05.308797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:36.625 [2024-11-27 11:21:05.308805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.625 [2024-11-27 11:21:05.308812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:36.625 [2024-11-27 11:21:05.308819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:36.625 [2024-11-27 11:21:05.308826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.625 [2024-11-27 11:21:05.308854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:36.625 [2024-11-27 11:21:05.308866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:36.625 [2024-11-27 11:21:05.308875] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.625 [2024-11-27 11:21:05.308883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:36.625 [2024-11-27 11:21:05.308907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:36.625 [2024-11-27 11:21:05.308916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.625 [2024-11-27 11:21:05.308924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:36.625 [2024-11-27 11:21:05.308932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:36.625 [2024-11-27 11:21:05.308940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.625 [2024-11-27 11:21:05.308950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:36.625 [2024-11-27 11:21:05.308958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:36.625 [2024-11-27 11:21:05.308965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.625 [2024-11-27 11:21:05.308973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:36.625 [2024-11-27 11:21:05.308980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:36.625 [2024-11-27 11:21:05.308988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.625 [2024-11-27 11:21:05.308995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:36.625 [2024-11-27 11:21:05.309003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:36.625 [2024-11-27 11:21:05.309018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.625 [2024-11-27 11:21:05.309026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:36.625 [2024-11-27 11:21:05.309033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:36.625 [2024-11-27 11:21:05.309041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.625 [2024-11-27 11:21:05.309048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:36.625 [2024-11-27 11:21:05.309056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:36.625 [2024-11-27 11:21:05.309063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.625 [2024-11-27 11:21:05.309072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:36.625 [2024-11-27 11:21:05.309079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:36.625 [2024-11-27 11:21:05.309087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.625 [2024-11-27 11:21:05.309094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:36.625 [2024-11-27 11:21:05.309101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:36.625 [2024-11-27 11:21:05.309110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.625 [2024-11-27 11:21:05.309117] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:36.625 [2024-11-27 11:21:05.309126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:36.625 [2024-11-27 11:21:05.309134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.625 [2024-11-27 11:21:05.309144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.625 [2024-11-27 
11:21:05.309154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:36.625 [2024-11-27 11:21:05.309162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:36.625 [2024-11-27 11:21:05.309170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:36.625 [2024-11-27 11:21:05.309178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:36.625 [2024-11-27 11:21:05.309185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:36.625 [2024-11-27 11:21:05.309192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:36.625 [2024-11-27 11:21:05.309200] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:36.625 [2024-11-27 11:21:05.309209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.625 [2024-11-27 11:21:05.309220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:36.625 [2024-11-27 11:21:05.309228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:36.625 [2024-11-27 11:21:05.309234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:36.625 [2024-11-27 11:21:05.309241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:36.625 [2024-11-27 11:21:05.309248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:36.625 [2024-11-27 11:21:05.309255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:36.625 [2024-11-27 11:21:05.309262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:36.625 [2024-11-27 11:21:05.309271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:36.625 [2024-11-27 11:21:05.309277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:36.625 [2024-11-27 11:21:05.309284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:36.625 [2024-11-27 11:21:05.309291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:36.625 [2024-11-27 11:21:05.309298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:36.625 [2024-11-27 11:21:05.309305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:36.625 [2024-11-27 11:21:05.309311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:36.625 [2024-11-27 11:21:05.309318] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:25:36.625 [2024-11-27 11:21:05.309326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.625 [2024-11-27 11:21:05.309334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:36.625 [2024-11-27 11:21:05.309341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:36.625 [2024-11-27 11:21:05.309348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:36.625 [2024-11-27 11:21:05.309355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:36.625 [2024-11-27 11:21:05.309362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.309371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:36.626 [2024-11-27 11:21:05.309378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:25:36.626 [2024-11-27 11:21:05.309387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.327506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.330920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:36.626 [2024-11-27 11:21:05.330945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.077 ms 00:25:36.626 [2024-11-27 11:21:05.330953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.331035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.331043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:36.626 [2024-11-27 11:21:05.331054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:36.626 [2024-11-27 11:21:05.331061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.339292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.339325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:36.626 [2024-11-27 11:21:05.339335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.179 ms 00:25:36.626 [2024-11-27 11:21:05.339342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.339376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.339387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:36.626 [2024-11-27 11:21:05.339395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:36.626 [2024-11-27 11:21:05.339402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.339736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.339760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:36.626 [2024-11-27 11:21:05.339769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:25:36.626 [2024-11-27 11:21:05.339780] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.339913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.339924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:36.626 [2024-11-27 11:21:05.339935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:25:36.626 [2024-11-27 11:21:05.339944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.344806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.344844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:36.626 [2024-11-27 11:21:05.344860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.840 ms 00:25:36.626 [2024-11-27 11:21:05.344868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.348921] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:36.626 [2024-11-27 11:21:05.348952] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:36.626 [2024-11-27 11:21:05.348963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.348971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:36.626 [2024-11-27 11:21:05.348982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.995 ms 00:25:36.626 [2024-11-27 11:21:05.348989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.363659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.363697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:36.626 [2024-11-27 11:21:05.363708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.634 ms 00:25:36.626 [2024-11-27 11:21:05.363715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.365871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.365915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:36.626 [2024-11-27 11:21:05.365924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:25:36.626 [2024-11-27 11:21:05.365932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.367754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.367784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:36.626 [2024-11-27 11:21:05.367793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:25:36.626 [2024-11-27 11:21:05.367799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.368145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.368168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:36.626 [2024-11-27 11:21:05.368179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:25:36.626 [2024-11-27 11:21:05.368188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 
[2024-11-27 11:21:05.385394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.385443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:36.626 [2024-11-27 11:21:05.385455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.184 ms 00:25:36.626 [2024-11-27 11:21:05.385463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.392971] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:36.626 [2024-11-27 11:21:05.395328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.395359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:36.626 [2024-11-27 11:21:05.395370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.825 ms 00:25:36.626 [2024-11-27 11:21:05.395379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.395431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.395441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:36.626 [2024-11-27 11:21:05.395451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:36.626 [2024-11-27 11:21:05.395459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.395555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.395566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:36.626 [2024-11-27 11:21:05.395574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:36.626 [2024-11-27 11:21:05.395582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.395603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.395612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:36.626 [2024-11-27 11:21:05.395625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:36.626 [2024-11-27 11:21:05.395632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.395660] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:36.626 [2024-11-27 11:21:05.395672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.395679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:36.626 [2024-11-27 11:21:05.395687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:36.626 [2024-11-27 11:21:05.395694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.399831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.399869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:36.626 [2024-11-27 11:21:05.399880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.118 ms 00:25:36.626 [2024-11-27 11:21:05.399899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.399969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.626 [2024-11-27 11:21:05.399981] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:36.626 [2024-11-27 11:21:05.399990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:36.626 [2024-11-27 11:21:05.399996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.626 [2024-11-27 11:21:05.400940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.538 ms, result 0 00:25:37.573  [2024-11-27T11:21:07.841Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-27T11:21:08.415Z] Copying: 29/1024 [MB] (15 MBps) [2024-11-27T11:21:09.801Z] Copying: 43/1024 [MB] (14 MBps) [2024-11-27T11:21:10.745Z] Copying: 57/1024 [MB] (13 MBps) [2024-11-27T11:21:11.691Z] Copying: 73/1024 [MB] (15 MBps) [2024-11-27T11:21:12.634Z] Copying: 89/1024 [MB] (16 MBps) [2024-11-27T11:21:13.579Z] Copying: 108/1024 [MB] (19 MBps) [2024-11-27T11:21:14.523Z] Copying: 126/1024 [MB] (18 MBps) [2024-11-27T11:21:15.471Z] Copying: 140/1024 [MB] (13 MBps) [2024-11-27T11:21:16.523Z] Copying: 157/1024 [MB] (17 MBps) [2024-11-27T11:21:17.467Z] Copying: 172/1024 [MB] (14 MBps) [2024-11-27T11:21:18.854Z] Copying: 185/1024 [MB] (13 MBps) [2024-11-27T11:21:19.428Z] Copying: 197/1024 [MB] (11 MBps) [2024-11-27T11:21:20.816Z] Copying: 213/1024 [MB] (15 MBps) [2024-11-27T11:21:21.759Z] Copying: 225/1024 [MB] (12 MBps) [2024-11-27T11:21:22.700Z] Copying: 236/1024 [MB] (10 MBps) [2024-11-27T11:21:23.644Z] Copying: 249/1024 [MB] (13 MBps) [2024-11-27T11:21:24.588Z] Copying: 266/1024 [MB] (16 MBps) [2024-11-27T11:21:25.533Z] Copying: 276/1024 [MB] (10 MBps) [2024-11-27T11:21:26.477Z] Copying: 286/1024 [MB] (10 MBps) [2024-11-27T11:21:27.420Z] Copying: 296/1024 [MB] (10 MBps) [2024-11-27T11:21:28.804Z] Copying: 307/1024 [MB] (10 MBps) [2024-11-27T11:21:29.745Z] Copying: 324688/1048576 [kB] (10132 kBps) [2024-11-27T11:21:30.691Z] Copying: 327/1024 [MB] (10 MBps) [2024-11-27T11:21:31.632Z] Copying: 338/1024 [MB] (11 MBps) [2024-11-27T11:21:32.577Z] Copying: 355/1024 [MB] (16 MBps) [2024-11-27T11:21:33.514Z] Copying: 368/1024 [MB] (12 MBps) [2024-11-27T11:21:34.455Z] Copying: 405/1024 [MB] (37 MBps) [2024-11-27T11:21:35.842Z] Copying: 432/1024 [MB] (27 MBps) [2024-11-27T11:21:36.416Z] Copying: 454/1024 [MB] (21 MBps) [2024-11-27T11:21:37.799Z] Copying: 473/1024 [MB] (19 MBps) [2024-11-27T11:21:38.738Z] Copying: 492/1024 [MB] (19 MBps) [2024-11-27T11:21:39.681Z] Copying: 512/1024 [MB] (19 MBps) [2024-11-27T11:21:40.624Z] Copying: 533/1024 [MB] (21 MBps) [2024-11-27T11:21:41.568Z] Copying: 546/1024 [MB] (13 MBps) [2024-11-27T11:21:42.512Z] Copying: 559/1024 [MB] (12 MBps) [2024-11-27T11:21:43.457Z] Copying: 576/1024 [MB] (16 MBps) [2024-11-27T11:21:44.847Z] Copying: 592/1024 [MB] (16 MBps) [2024-11-27T11:21:45.418Z] Copying: 603/1024 [MB] (11 MBps) [2024-11-27T11:21:46.806Z] Copying: 621/1024 [MB] (17 MBps) [2024-11-27T11:21:47.750Z] Copying: 636/1024 [MB] (14 MBps) [2024-11-27T11:21:48.767Z] Copying: 647/1024 [MB] (11 MBps) [2024-11-27T11:21:49.710Z] Copying: 673256/1048576 [kB] (10072 kBps) [2024-11-27T11:21:50.653Z] Copying: 683320/1048576 [kB] (10064 kBps) [2024-11-27T11:21:51.596Z] Copying: 693448/1048576 [kB] (10128 kBps) [2024-11-27T11:21:52.529Z] Copying: 690/1024 [MB] (13 MBps) [2024-11-27T11:21:53.467Z] Copying: 745/1024 [MB] (54 MBps) [2024-11-27T11:21:54.850Z] Copying: 785/1024 [MB] (40 MBps) [2024-11-27T11:21:55.424Z] Copying: 798/1024 [MB] (13 MBps) [2024-11-27T11:21:56.813Z] Copying: 809/1024 [MB] (10 MBps) 
[2024-11-27T11:21:57.758Z] Copying: 819/1024 [MB] (10 MBps) [2024-11-27T11:21:58.705Z] Copying: 830/1024 [MB] (10 MBps) [2024-11-27T11:21:59.647Z] Copying: 840/1024 [MB] (10 MBps) [2024-11-27T11:22:00.592Z] Copying: 858/1024 [MB] (17 MBps) [2024-11-27T11:22:01.533Z] Copying: 872/1024 [MB] (14 MBps) [2024-11-27T11:22:02.478Z] Copying: 903992/1048576 [kB] (10232 kBps) [2024-11-27T11:22:03.423Z] Copying: 892/1024 [MB] (10 MBps) [2024-11-27T11:22:04.812Z] Copying: 905/1024 [MB] (12 MBps) [2024-11-27T11:22:05.756Z] Copying: 917/1024 [MB] (11 MBps) [2024-11-27T11:22:06.699Z] Copying: 929/1024 [MB] (12 MBps) [2024-11-27T11:22:07.651Z] Copying: 940/1024 [MB] (10 MBps) [2024-11-27T11:22:08.592Z] Copying: 950/1024 [MB] (10 MBps) [2024-11-27T11:22:09.587Z] Copying: 960/1024 [MB] (10 MBps) [2024-11-27T11:22:10.527Z] Copying: 971/1024 [MB] (10 MBps) [2024-11-27T11:22:11.471Z] Copying: 986/1024 [MB] (15 MBps) [2024-11-27T11:22:12.417Z] Copying: 997/1024 [MB] (10 MBps) [2024-11-27T11:22:13.804Z] Copying: 1007/1024 [MB] (10 MBps) [2024-11-27T11:22:14.748Z] Copying: 1020/1024 [MB] (13 MBps) [2024-11-27T11:22:14.748Z] Copying: 1048568/1048576 [kB] (3136 kBps) [2024-11-27T11:22:14.748Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-27 11:22:14.434023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.865 [2024-11-27 11:22:14.434082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:45.865 [2024-11-27 11:22:14.434096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:45.865 [2024-11-27 11:22:14.434104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.865 [2024-11-27 11:22:14.434273] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:45.865 [2024-11-27 11:22:14.434810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.865 [2024-11-27 11:22:14.434833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:45.865 [2024-11-27 11:22:14.434843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:26:45.865 [2024-11-27 11:22:14.434850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.865 [2024-11-27 11:22:14.445969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.865 [2024-11-27 11:22:14.446029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:45.865 [2024-11-27 11:22:14.446041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.863 ms 00:26:45.865 [2024-11-27 11:22:14.446050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.865 [2024-11-27 11:22:14.468747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.865 [2024-11-27 11:22:14.468799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:45.865 [2024-11-27 11:22:14.468814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.680 ms 00:26:45.865 [2024-11-27 11:22:14.468825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.865 [2024-11-27 11:22:14.475203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.865 [2024-11-27 11:22:14.475237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:45.865 [2024-11-27 11:22:14.475247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.346 ms 00:26:45.865 [2024-11-27 11:22:14.475256] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.865 [2024-11-27 11:22:14.477625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.865 [2024-11-27 11:22:14.477663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:45.865 [2024-11-27 11:22:14.477672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.313 ms 00:26:45.865 [2024-11-27 11:22:14.477679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.865 [2024-11-27 11:22:14.481185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.865 [2024-11-27 11:22:14.481228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:45.865 [2024-11-27 11:22:14.481238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.473 ms 00:26:45.865 [2024-11-27 11:22:14.481246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.865 [2024-11-27 11:22:14.704458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.865 [2024-11-27 11:22:14.704518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:45.865 [2024-11-27 11:22:14.704532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 223.173 ms 00:26:45.865 [2024-11-27 11:22:14.704541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.865 [2024-11-27 11:22:14.707932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.865 [2024-11-27 11:22:14.707985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:45.865 [2024-11-27 11:22:14.707995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.360 ms 00:26:45.865 [2024-11-27 11:22:14.708003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.865 [2024-11-27 11:22:14.710883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.865 [2024-11-27 11:22:14.710948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:45.865 [2024-11-27 11:22:14.710959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.834 ms 00:26:45.865 [2024-11-27 11:22:14.710968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.865 [2024-11-27 11:22:14.713527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.865 [2024-11-27 11:22:14.713579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:45.865 [2024-11-27 11:22:14.713589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.513 ms 00:26:45.865 [2024-11-27 11:22:14.713597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.865 [2024-11-27 11:22:14.715812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.865 [2024-11-27 11:22:14.715863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:45.865 [2024-11-27 11:22:14.715873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:26:45.865 [2024-11-27 11:22:14.715882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.865 [2024-11-27 11:22:14.715939] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:45.865 [2024-11-27 11:22:14.715954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 114688 / 261120 wr_cnt: 1 state: open 00:26:45.865 [2024-11-27 11:22:14.715966] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:45.865 [2024-11-27 11:22:14.715975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:45.865 [2024-11-27 11:22:14.715984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:45.865 [2024-11-27 11:22:14.715992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:45.865 [2024-11-27 11:22:14.716001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:45.865 [2024-11-27 11:22:14.716009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:45.865 [2024-11-27 11:22:14.716017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 
11:22:14.716166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:26:45.866 [2024-11-27 11:22:14.716363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:45.866 [2024-11-27 11:22:14.716736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:45.867 [2024-11-27 11:22:14.716744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:45.867 [2024-11-27 11:22:14.716756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:45.867 [2024-11-27 11:22:14.716764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:45.867 [2024-11-27 11:22:14.716780] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:26:45.867 [2024-11-27 11:22:14.716795] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0b653020-fbad-440a-9449-43326b5105a8 00:26:45.867 [2024-11-27 11:22:14.716809] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 114688 00:26:45.867 [2024-11-27 11:22:14.716818] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 115648 00:26:45.867 [2024-11-27 11:22:14.716826] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 114688 00:26:45.867 [2024-11-27 11:22:14.716835] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0084 00:26:45.867 [2024-11-27 11:22:14.716842] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:45.867 [2024-11-27 11:22:14.716851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:45.867 [2024-11-27 11:22:14.716859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:45.867 [2024-11-27 11:22:14.716867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:45.867 [2024-11-27 11:22:14.716875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:45.867 [2024-11-27 11:22:14.716905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.867 [2024-11-27 11:22:14.716931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:45.867 [2024-11-27 11:22:14.716943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:26:45.867 [2024-11-27 11:22:14.716951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.867 [2024-11-27 11:22:14.719439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.867 [2024-11-27 11:22:14.719480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:45.867 [2024-11-27 11:22:14.719499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.464 ms 00:26:45.867 [2024-11-27 11:22:14.719515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.867 [2024-11-27 11:22:14.719644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.867 [2024-11-27 11:22:14.719654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:45.867 [2024-11-27 11:22:14.719668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:26:45.867 [2024-11-27 11:22:14.719676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.867 [2024-11-27 11:22:14.726753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.867 [2024-11-27 11:22:14.726808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:45.867 [2024-11-27 11:22:14.726827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.867 [2024-11-27 11:22:14.726837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.867 [2024-11-27 11:22:14.726940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.867 [2024-11-27 11:22:14.726956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:45.867 [2024-11-27 11:22:14.726968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.867 [2024-11-27 11:22:14.726977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.867 [2024-11-27 11:22:14.727024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:26:45.867 [2024-11-27 11:22:14.727035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:45.867 [2024-11-27 11:22:14.727043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.867 [2024-11-27 11:22:14.727051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.867 [2024-11-27 11:22:14.727067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.867 [2024-11-27 11:22:14.727075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:45.867 [2024-11-27 11:22:14.727084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.867 [2024-11-27 11:22:14.727096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.867 [2024-11-27 11:22:14.740839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.867 [2024-11-27 11:22:14.740902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:45.867 [2024-11-27 11:22:14.740924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.867 [2024-11-27 11:22:14.740934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.128 [2024-11-27 11:22:14.751346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.128 [2024-11-27 11:22:14.751401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:46.128 [2024-11-27 11:22:14.751422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.128 [2024-11-27 11:22:14.751431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.128 [2024-11-27 11:22:14.751480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.128 [2024-11-27 11:22:14.751490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:46.128 [2024-11-27 11:22:14.751499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.128 [2024-11-27 11:22:14.751507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.128 [2024-11-27 11:22:14.751541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.128 [2024-11-27 11:22:14.751556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:46.128 [2024-11-27 11:22:14.751565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.128 [2024-11-27 11:22:14.751574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.128 [2024-11-27 11:22:14.751646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.128 [2024-11-27 11:22:14.751658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:46.128 [2024-11-27 11:22:14.751666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.128 [2024-11-27 11:22:14.751675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.128 [2024-11-27 11:22:14.751703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.128 [2024-11-27 11:22:14.751713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:46.128 [2024-11-27 11:22:14.751723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.128 [2024-11-27 11:22:14.751731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.128 [2024-11-27 
11:22:14.751774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.128 [2024-11-27 11:22:14.751784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:46.128 [2024-11-27 11:22:14.751792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.128 [2024-11-27 11:22:14.751800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.128 [2024-11-27 11:22:14.751846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.128 [2024-11-27 11:22:14.751876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:46.128 [2024-11-27 11:22:14.751921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.128 [2024-11-27 11:22:14.751931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.128 [2024-11-27 11:22:14.752071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 320.614 ms, result 0 00:26:47.513 00:26:47.513 00:26:47.513 11:22:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:49.427 11:22:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:49.688 [2024-11-27 11:22:18.336359] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:49.688 [2024-11-27 11:22:18.336512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91292 ] 00:26:49.688 [2024-11-27 11:22:18.489690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.688 [2024-11-27 11:22:18.540170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.949 [2024-11-27 11:22:18.649281] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:49.949 [2024-11-27 11:22:18.649373] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:49.949 [2024-11-27 11:22:18.811720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.949 [2024-11-27 11:22:18.811792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:49.949 [2024-11-27 11:22:18.811823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:49.949 [2024-11-27 11:22:18.811836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.949 [2024-11-27 11:22:18.811955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.949 [2024-11-27 11:22:18.811979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:49.949 [2024-11-27 11:22:18.812004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:26:49.949 [2024-11-27 11:22:18.812018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.949 [2024-11-27 11:22:18.812057] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:49.949 [2024-11-27 11:22:18.812755] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache 
device 00:26:49.949 [2024-11-27 11:22:18.812814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.949 [2024-11-27 11:22:18.812831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:49.949 [2024-11-27 11:22:18.812851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:26:49.949 [2024-11-27 11:22:18.812876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.949 [2024-11-27 11:22:18.814804] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:49.949 [2024-11-27 11:22:18.818856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.949 [2024-11-27 11:22:18.818934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:49.949 [2024-11-27 11:22:18.818952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.054 ms 00:26:49.949 [2024-11-27 11:22:18.818976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.949 [2024-11-27 11:22:18.819121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.949 [2024-11-27 11:22:18.819152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:49.949 [2024-11-27 11:22:18.819170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:49.949 [2024-11-27 11:22:18.819192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.949 [2024-11-27 11:22:18.828545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.949 [2024-11-27 11:22:18.828605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:49.949 [2024-11-27 11:22:18.828622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.286 ms 00:26:49.949 [2024-11-27 11:22:18.828639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.949 [2024-11-27 11:22:18.828768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.949 [2024-11-27 11:22:18.828792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:49.950 [2024-11-27 11:22:18.828808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:26:49.950 [2024-11-27 11:22:18.828828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.950 [2024-11-27 11:22:18.828980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.950 [2024-11-27 11:22:18.829007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:49.950 [2024-11-27 11:22:18.829032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:49.950 [2024-11-27 11:22:18.829045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.950 [2024-11-27 11:22:18.829087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:50.217 [2024-11-27 11:22:18.831323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 [2024-11-27 11:22:18.831376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:50.217 [2024-11-27 11:22:18.831391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.252 ms 00:26:50.217 [2024-11-27 11:22:18.831409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.217 [2024-11-27 11:22:18.831480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 
[2024-11-27 11:22:18.831500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:50.217 [2024-11-27 11:22:18.831515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:50.217 [2024-11-27 11:22:18.831529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.217 [2024-11-27 11:22:18.831560] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:50.217 [2024-11-27 11:22:18.831597] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:50.217 [2024-11-27 11:22:18.831656] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:50.217 [2024-11-27 11:22:18.831696] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:50.217 [2024-11-27 11:22:18.831825] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:50.217 [2024-11-27 11:22:18.831850] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:50.217 [2024-11-27 11:22:18.831870] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:50.217 [2024-11-27 11:22:18.831908] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:50.217 [2024-11-27 11:22:18.831941] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:50.217 [2024-11-27 11:22:18.831964] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:50.217 [2024-11-27 11:22:18.831978] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:50.217 [2024-11-27 11:22:18.831989] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:50.217 [2024-11-27 11:22:18.832003] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:50.217 [2024-11-27 11:22:18.832018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 [2024-11-27 11:22:18.832032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:50.217 [2024-11-27 11:22:18.832046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:26:50.217 [2024-11-27 11:22:18.832060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.217 [2024-11-27 11:22:18.832183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.217 [2024-11-27 11:22:18.832215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:50.217 [2024-11-27 11:22:18.832228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:26:50.217 [2024-11-27 11:22:18.832243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.217 [2024-11-27 11:22:18.832385] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:50.217 [2024-11-27 11:22:18.832409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:50.217 [2024-11-27 11:22:18.832425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:50.217 [2024-11-27 11:22:18.832450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:50.217 [2024-11-27 11:22:18.832466] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:50.217 [2024-11-27 11:22:18.832480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:50.217 [2024-11-27 11:22:18.832493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:50.217 [2024-11-27 11:22:18.832507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:50.217 [2024-11-27 11:22:18.832520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:50.217 [2024-11-27 11:22:18.832534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:50.217 [2024-11-27 11:22:18.832545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:50.217 [2024-11-27 11:22:18.832564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:50.217 [2024-11-27 11:22:18.832575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:50.217 [2024-11-27 11:22:18.832586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:50.217 [2024-11-27 11:22:18.832599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:50.217 [2024-11-27 11:22:18.832610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:50.217 [2024-11-27 11:22:18.832622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:50.217 [2024-11-27 11:22:18.832636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:50.217 [2024-11-27 11:22:18.832648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:50.217 [2024-11-27 11:22:18.832659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:50.217 [2024-11-27 11:22:18.832679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:50.217 [2024-11-27 11:22:18.832696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:50.217 [2024-11-27 11:22:18.832708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:50.217 [2024-11-27 11:22:18.832722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:50.217 [2024-11-27 11:22:18.832734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:50.217 [2024-11-27 11:22:18.832745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:50.217 [2024-11-27 11:22:18.832759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:50.218 [2024-11-27 11:22:18.832770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:50.218 [2024-11-27 11:22:18.832782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:50.218 [2024-11-27 11:22:18.832794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:50.218 [2024-11-27 11:22:18.832805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:50.218 [2024-11-27 11:22:18.832818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:50.218 [2024-11-27 11:22:18.832830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:50.218 [2024-11-27 11:22:18.832849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:50.218 [2024-11-27 11:22:18.832860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:50.218 [2024-11-27 11:22:18.832872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:50.218 [2024-11-27 
11:22:18.832906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:50.218 [2024-11-27 11:22:18.832934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:50.218 [2024-11-27 11:22:18.832948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:50.218 [2024-11-27 11:22:18.832959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:50.218 [2024-11-27 11:22:18.832971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:50.218 [2024-11-27 11:22:18.832985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:50.218 [2024-11-27 11:22:18.833002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:50.218 [2024-11-27 11:22:18.833013] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:50.218 [2024-11-27 11:22:18.833036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:50.218 [2024-11-27 11:22:18.833051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:50.218 [2024-11-27 11:22:18.833070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:50.218 [2024-11-27 11:22:18.833085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:50.218 [2024-11-27 11:22:18.833098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:50.218 [2024-11-27 11:22:18.833109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:50.218 [2024-11-27 11:22:18.833122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:50.218 [2024-11-27 11:22:18.833133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:50.218 [2024-11-27 11:22:18.833149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:50.218 [2024-11-27 11:22:18.833165] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:50.218 [2024-11-27 11:22:18.833182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:50.218 [2024-11-27 11:22:18.833198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:50.218 [2024-11-27 11:22:18.833217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:50.218 [2024-11-27 11:22:18.833229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:50.218 [2024-11-27 11:22:18.833244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:50.218 [2024-11-27 11:22:18.833257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:50.218 [2024-11-27 11:22:18.833270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:50.218 [2024-11-27 11:22:18.833283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:50.218 [2024-11-27 11:22:18.833295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:50.218 [2024-11-27 11:22:18.833308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:50.218 [2024-11-27 11:22:18.833320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:50.218 [2024-11-27 11:22:18.833334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:50.218 [2024-11-27 11:22:18.833347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:50.218 [2024-11-27 11:22:18.833360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:50.218 [2024-11-27 11:22:18.833379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:50.218 [2024-11-27 11:22:18.833397] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:50.218 [2024-11-27 11:22:18.833412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:50.218 [2024-11-27 11:22:18.833431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:50.218 [2024-11-27 11:22:18.833444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:50.218 [2024-11-27 11:22:18.833458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:50.218 [2024-11-27 11:22:18.833470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:50.218 [2024-11-27 11:22:18.833485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-11-27 11:22:18.833498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:50.218 [2024-11-27 11:22:18.833512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.192 ms 00:26:50.218 [2024-11-27 11:22:18.833524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-11-27 11:22:18.858030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-11-27 11:22:18.858109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:50.218 [2024-11-27 11:22:18.858138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.413 ms 00:26:50.218 [2024-11-27 11:22:18.858162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-11-27 11:22:18.858345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-11-27 11:22:18.858374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:50.218 [2024-11-27 11:22:18.858397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:26:50.218 [2024-11-27 11:22:18.858417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-11-27 11:22:18.870238] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-11-27 11:22:18.870294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:50.218 [2024-11-27 11:22:18.870316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.696 ms 00:26:50.218 [2024-11-27 11:22:18.870328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-11-27 11:22:18.870379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-11-27 11:22:18.870393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:50.218 [2024-11-27 11:22:18.870406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:50.218 [2024-11-27 11:22:18.870417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-11-27 11:22:18.870982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-11-27 11:22:18.871035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:50.218 [2024-11-27 11:22:18.871050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:26:50.218 [2024-11-27 11:22:18.871063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-11-27 11:22:18.871247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-11-27 11:22:18.871279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:50.218 [2024-11-27 11:22:18.871295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:26:50.218 [2024-11-27 11:22:18.871317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-11-27 11:22:18.878163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-11-27 11:22:18.878214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:50.218 [2024-11-27 11:22:18.878237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.809 ms 00:26:50.218 [2024-11-27 11:22:18.878249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-11-27 11:22:18.882075] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:50.218 [2024-11-27 11:22:18.882144] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:50.218 [2024-11-27 11:22:18.882165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-11-27 11:22:18.882178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:50.218 [2024-11-27 11:22:18.882194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.788 ms 00:26:50.218 [2024-11-27 11:22:18.882206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-11-27 11:22:18.898122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-11-27 11:22:18.898184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:50.218 [2024-11-27 11:22:18.898220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.814 ms 00:26:50.218 [2024-11-27 11:22:18.898233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-11-27 11:22:18.901175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 
[2024-11-27 11:22:18.901231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:50.218 [2024-11-27 11:22:18.901247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.821 ms 00:26:50.218 [2024-11-27 11:22:18.901268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-11-27 11:22:18.904094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-11-27 11:22:18.904151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:50.218 [2024-11-27 11:22:18.904166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.767 ms 00:26:50.218 [2024-11-27 11:22:18.904177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.218 [2024-11-27 11:22:18.904592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.218 [2024-11-27 11:22:18.904631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:50.218 [2024-11-27 11:22:18.904658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:26:50.218 [2024-11-27 11:22:18.904671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-11-27 11:22:18.928709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.219 [2024-11-27 11:22:18.928782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:50.219 [2024-11-27 11:22:18.928808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.000 ms 00:26:50.219 [2024-11-27 11:22:18.928820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-11-27 11:22:18.937093] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:50.219 [2024-11-27 11:22:18.940036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.219 [2024-11-27 11:22:18.940086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:50.219 [2024-11-27 11:22:18.940115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.155 ms 00:26:50.219 [2024-11-27 11:22:18.940129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-11-27 11:22:18.940229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.219 [2024-11-27 11:22:18.940254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:50.219 [2024-11-27 11:22:18.940274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:50.219 [2024-11-27 11:22:18.940295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-11-27 11:22:18.942214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.219 [2024-11-27 11:22:18.942266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:50.219 [2024-11-27 11:22:18.942282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:26:50.219 [2024-11-27 11:22:18.942299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-11-27 11:22:18.942344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.219 [2024-11-27 11:22:18.942358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:50.219 [2024-11-27 11:22:18.942371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:50.219 
[2024-11-27 11:22:18.942383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-11-27 11:22:18.942435] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:50.219 [2024-11-27 11:22:18.942459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.219 [2024-11-27 11:22:18.942472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:50.219 [2024-11-27 11:22:18.942485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:50.219 [2024-11-27 11:22:18.942498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-11-27 11:22:18.947644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.219 [2024-11-27 11:22:18.947697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:50.219 [2024-11-27 11:22:18.947714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.106 ms 00:26:50.219 [2024-11-27 11:22:18.947725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-11-27 11:22:18.947830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.219 [2024-11-27 11:22:18.947848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:50.219 [2024-11-27 11:22:18.947864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:50.219 [2024-11-27 11:22:18.947877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.219 [2024-11-27 11:22:18.949181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.940 ms, result 0 00:26:51.263  [2024-11-27T11:22:21.534Z] Copying: 1068/1048576 [kB] (1068 kBps) [2024-11-27T11:22:22.470Z] Copying: 4692/1048576 [kB] (3624 kBps) [2024-11-27T11:22:23.414Z] Copying: 34/1024 [MB] (30 MBps) [2024-11-27T11:22:24.358Z] Copying: 68/1024 [MB] (33 MBps) [2024-11-27T11:22:25.301Z] Copying: 102/1024 [MB] (34 MBps) [2024-11-27T11:22:26.245Z] Copying: 134/1024 [MB] (32 MBps) [2024-11-27T11:22:27.187Z] Copying: 161/1024 [MB] (26 MBps) [2024-11-27T11:22:28.570Z] Copying: 188/1024 [MB] (27 MBps) [2024-11-27T11:22:29.142Z] Copying: 211/1024 [MB] (23 MBps) [2024-11-27T11:22:30.526Z] Copying: 243/1024 [MB] (31 MBps) [2024-11-27T11:22:31.462Z] Copying: 274/1024 [MB] (30 MBps) [2024-11-27T11:22:32.405Z] Copying: 298/1024 [MB] (24 MBps) [2024-11-27T11:22:33.346Z] Copying: 326/1024 [MB] (28 MBps) [2024-11-27T11:22:34.291Z] Copying: 355/1024 [MB] (28 MBps) [2024-11-27T11:22:35.229Z] Copying: 386/1024 [MB] (31 MBps) [2024-11-27T11:22:36.169Z] Copying: 409/1024 [MB] (22 MBps) [2024-11-27T11:22:37.555Z] Copying: 444/1024 [MB] (34 MBps) [2024-11-27T11:22:38.497Z] Copying: 472/1024 [MB] (27 MBps) [2024-11-27T11:22:39.459Z] Copying: 490/1024 [MB] (18 MBps) [2024-11-27T11:22:40.396Z] Copying: 521/1024 [MB] (30 MBps) [2024-11-27T11:22:41.338Z] Copying: 554/1024 [MB] (33 MBps) [2024-11-27T11:22:42.277Z] Copying: 583/1024 [MB] (28 MBps) [2024-11-27T11:22:43.214Z] Copying: 612/1024 [MB] (28 MBps) [2024-11-27T11:22:44.156Z] Copying: 652/1024 [MB] (40 MBps) [2024-11-27T11:22:45.541Z] Copying: 686/1024 [MB] (33 MBps) [2024-11-27T11:22:46.484Z] Copying: 715/1024 [MB] (29 MBps) [2024-11-27T11:22:47.427Z] Copying: 740/1024 [MB] (24 MBps) [2024-11-27T11:22:48.371Z] Copying: 765/1024 [MB] (25 MBps) [2024-11-27T11:22:49.315Z] Copying: 792/1024 [MB] (27 MBps) [2024-11-27T11:22:50.256Z] 
Copying: 821/1024 [MB] (29 MBps) [2024-11-27T11:22:51.290Z] Copying: 848/1024 [MB] (26 MBps) [2024-11-27T11:22:52.233Z] Copying: 880/1024 [MB] (31 MBps) [2024-11-27T11:22:53.177Z] Copying: 902/1024 [MB] (22 MBps) [2024-11-27T11:22:54.562Z] Copying: 924/1024 [MB] (22 MBps) [2024-11-27T11:22:55.501Z] Copying: 956/1024 [MB] (32 MBps) [2024-11-27T11:22:56.440Z] Copying: 991/1024 [MB] (34 MBps) [2024-11-27T11:22:56.440Z] Copying: 1018/1024 [MB] (26 MBps) [2024-11-27T11:22:56.701Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-27 11:22:56.447803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.818 [2024-11-27 11:22:56.447945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:27.818 [2024-11-27 11:22:56.447973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:27.818 [2024-11-27 11:22:56.447991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.818 [2024-11-27 11:22:56.448032] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:27.818 [2024-11-27 11:22:56.448686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.818 [2024-11-27 11:22:56.448873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:27.818 [2024-11-27 11:22:56.448933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:27:27.818 [2024-11-27 11:22:56.449001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.818 [2024-11-27 11:22:56.449449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.818 [2024-11-27 11:22:56.449469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:27.818 [2024-11-27 11:22:56.449485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:27:27.818 [2024-11-27 11:22:56.449500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.818 [2024-11-27 11:22:56.462591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.819 [2024-11-27 11:22:56.462627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:27.819 [2024-11-27 11:22:56.462637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.012 ms 00:27:27.819 [2024-11-27 11:22:56.462784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.819 [2024-11-27 11:22:56.467490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.819 [2024-11-27 11:22:56.467519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:27.819 [2024-11-27 11:22:56.467528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.683 ms 00:27:27.819 [2024-11-27 11:22:56.467536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.819 [2024-11-27 11:22:56.468685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.819 [2024-11-27 11:22:56.468718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:27.819 [2024-11-27 11:22:56.468726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:27:27.819 [2024-11-27 11:22:56.468735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.819 [2024-11-27 11:22:56.472337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.819 [2024-11-27 11:22:56.472371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist valid map metadata 00:27:27.819 [2024-11-27 11:22:56.472379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.574 ms 00:27:27.819 [2024-11-27 11:22:56.472390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.819 [2024-11-27 11:22:56.474285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.819 [2024-11-27 11:22:56.474314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:27.819 [2024-11-27 11:22:56.474328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.863 ms 00:27:27.819 [2024-11-27 11:22:56.474334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.819 [2024-11-27 11:22:56.476425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.819 [2024-11-27 11:22:56.476455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:27.819 [2024-11-27 11:22:56.476463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.078 ms 00:27:27.819 [2024-11-27 11:22:56.476468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.819 [2024-11-27 11:22:56.477984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.819 [2024-11-27 11:22:56.478022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:27.819 [2024-11-27 11:22:56.478029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:27:27.819 [2024-11-27 11:22:56.478035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.819 [2024-11-27 11:22:56.479220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.819 [2024-11-27 11:22:56.479249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:27.819 [2024-11-27 11:22:56.479256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms 00:27:27.819 [2024-11-27 11:22:56.479262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.819 [2024-11-27 11:22:56.480289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.819 [2024-11-27 11:22:56.480318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:27.819 [2024-11-27 11:22:56.480326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:27:27.819 [2024-11-27 11:22:56.480332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.819 [2024-11-27 11:22:56.480355] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:27.819 [2024-11-27 11:22:56.480365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:27.819 [2024-11-27 11:22:56.480373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:27.819 [2024-11-27 11:22:56.480380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480404] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 
11:22:56.480552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:27:27.819 [2024-11-27 11:22:56.480703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:27.819 [2024-11-27 11:22:56.480811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.480993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:27.820 [2024-11-27 11:22:56.481006] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:27.820 [2024-11-27 11:22:56.481013] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0b653020-fbad-440a-9449-43326b5105a8 00:27:27.820 [2024-11-27 11:22:56.481019] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:27.820 [2024-11-27 11:22:56.481029] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 149952 00:27:27.820 [2024-11-27 11:22:56.481035] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 147968 00:27:27.820 [2024-11-27 11:22:56.481041] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0134 00:27:27.820 [2024-11-27 11:22:56.481047] ftl_debug.c: 218:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] limits: 00:27:27.820 [2024-11-27 11:22:56.481053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:27.820 [2024-11-27 11:22:56.481059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:27.820 [2024-11-27 11:22:56.481064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:27.820 [2024-11-27 11:22:56.481069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:27.820 [2024-11-27 11:22:56.481075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.820 [2024-11-27 11:22:56.481081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:27.820 [2024-11-27 11:22:56.481091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:27:27.820 [2024-11-27 11:22:56.481098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.482515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.820 [2024-11-27 11:22:56.482545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:27.820 [2024-11-27 11:22:56.482553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:27:27.820 [2024-11-27 11:22:56.482559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.482637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.820 [2024-11-27 11:22:56.482644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:27.820 [2024-11-27 11:22:56.482655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:27:27.820 [2024-11-27 11:22:56.482662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.486833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.820 [2024-11-27 11:22:56.486863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:27.820 [2024-11-27 11:22:56.486871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.820 [2024-11-27 11:22:56.486877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.486947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.820 [2024-11-27 11:22:56.486955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:27.820 [2024-11-27 11:22:56.486962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.820 [2024-11-27 11:22:56.486970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.486999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.820 [2024-11-27 11:22:56.487006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:27.820 [2024-11-27 11:22:56.487016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.820 [2024-11-27 11:22:56.487023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.487037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.820 [2024-11-27 11:22:56.487045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:27.820 [2024-11-27 11:22:56.487051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.820 [2024-11-27 
11:22:56.487057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.495105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.820 [2024-11-27 11:22:56.495141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:27.820 [2024-11-27 11:22:56.495150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.820 [2024-11-27 11:22:56.495157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.501677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.820 [2024-11-27 11:22:56.501712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:27.820 [2024-11-27 11:22:56.501721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.820 [2024-11-27 11:22:56.501727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.501766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.820 [2024-11-27 11:22:56.501773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:27.820 [2024-11-27 11:22:56.501779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.820 [2024-11-27 11:22:56.501785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.501804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.820 [2024-11-27 11:22:56.501810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:27.820 [2024-11-27 11:22:56.501816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.820 [2024-11-27 11:22:56.501821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.501870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.820 [2024-11-27 11:22:56.501880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:27.820 [2024-11-27 11:22:56.501898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.820 [2024-11-27 11:22:56.501904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.501930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.820 [2024-11-27 11:22:56.501936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:27.820 [2024-11-27 11:22:56.501942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.820 [2024-11-27 11:22:56.501949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.501980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.820 [2024-11-27 11:22:56.501990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:27.820 [2024-11-27 11:22:56.501997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.820 [2024-11-27 11:22:56.502002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.502034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:27.820 [2024-11-27 11:22:56.502041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:27.820 [2024-11-27 11:22:56.502047] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:27.820 [2024-11-27 11:22:56.502053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.820 [2024-11-27 11:22:56.502150] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.345 ms, result 0 00:27:27.820 00:27:27.820 00:27:27.820 11:22:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:29.755 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:29.755 11:22:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:29.755 [2024-11-27 11:22:58.318858] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:29.755 [2024-11-27 11:22:58.318962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91700 ] 00:27:29.755 [2024-11-27 11:22:58.459082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.755 [2024-11-27 11:22:58.488489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.755 [2024-11-27 11:22:58.569207] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:29.755 [2024-11-27 11:22:58.569263] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:30.015 [2024-11-27 11:22:58.722092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.015 [2024-11-27 11:22:58.722126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:30.015 [2024-11-27 11:22:58.722141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:30.015 [2024-11-27 11:22:58.722147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.015 [2024-11-27 11:22:58.722182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.015 [2024-11-27 11:22:58.722190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:30.015 [2024-11-27 11:22:58.722196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:30.015 [2024-11-27 11:22:58.722202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.015 [2024-11-27 11:22:58.722216] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:30.015 [2024-11-27 11:22:58.722398] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:30.015 [2024-11-27 11:22:58.722412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.015 [2024-11-27 11:22:58.722417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:30.015 [2024-11-27 11:22:58.722424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:27:30.015 [2024-11-27 11:22:58.722431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.015 [2024-11-27 11:22:58.723355] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:30.015 [2024-11-27 11:22:58.725493] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.015 [2024-11-27 11:22:58.725522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:30.015 [2024-11-27 11:22:58.725530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.139 ms 00:27:30.015 [2024-11-27 11:22:58.725541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.015 [2024-11-27 11:22:58.725584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.015 [2024-11-27 11:22:58.725592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:30.015 [2024-11-27 11:22:58.725600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:30.015 [2024-11-27 11:22:58.725605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.015 [2024-11-27 11:22:58.729950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.015 [2024-11-27 11:22:58.729974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:30.015 [2024-11-27 11:22:58.729982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.308 ms 00:27:30.015 [2024-11-27 11:22:58.729995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.015 [2024-11-27 11:22:58.730056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.015 [2024-11-27 11:22:58.730064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:30.015 [2024-11-27 11:22:58.730070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:30.015 [2024-11-27 11:22:58.730078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.015 [2024-11-27 11:22:58.730111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.015 [2024-11-27 11:22:58.730119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:30.015 [2024-11-27 11:22:58.730125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:30.015 [2024-11-27 11:22:58.730130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.015 [2024-11-27 11:22:58.730150] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:30.015 [2024-11-27 11:22:58.731281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.015 [2024-11-27 11:22:58.731304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:30.015 [2024-11-27 11:22:58.731311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:27:30.015 [2024-11-27 11:22:58.731316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.015 [2024-11-27 11:22:58.731341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.015 [2024-11-27 11:22:58.731348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:30.015 [2024-11-27 11:22:58.731354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:30.015 [2024-11-27 11:22:58.731359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.015 [2024-11-27 11:22:58.731379] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:30.015 [2024-11-27 11:22:58.731399] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:30.015 
[2024-11-27 11:22:58.731425] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:30.015 [2024-11-27 11:22:58.731438] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:30.015 [2024-11-27 11:22:58.731517] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:30.015 [2024-11-27 11:22:58.731525] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:30.015 [2024-11-27 11:22:58.731536] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:30.015 [2024-11-27 11:22:58.731543] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:30.015 [2024-11-27 11:22:58.731552] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:30.015 [2024-11-27 11:22:58.731558] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:30.015 [2024-11-27 11:22:58.731563] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:30.015 [2024-11-27 11:22:58.731569] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:30.015 [2024-11-27 11:22:58.731574] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:30.015 [2024-11-27 11:22:58.731581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.015 [2024-11-27 11:22:58.731586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:30.015 [2024-11-27 11:22:58.731592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:27:30.015 [2024-11-27 11:22:58.731597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.015 [2024-11-27 11:22:58.731664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.015 [2024-11-27 11:22:58.731671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:30.015 [2024-11-27 11:22:58.731677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:30.015 [2024-11-27 11:22:58.731683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.015 [2024-11-27 11:22:58.731754] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:30.015 [2024-11-27 11:22:58.731761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:30.015 [2024-11-27 11:22:58.731767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:30.015 [2024-11-27 11:22:58.731777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.015 [2024-11-27 11:22:58.731783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:30.015 [2024-11-27 11:22:58.731788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:30.015 [2024-11-27 11:22:58.731793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:30.015 [2024-11-27 11:22:58.731798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:30.015 [2024-11-27 11:22:58.731803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:30.015 [2024-11-27 11:22:58.731808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:30.015 
[2024-11-27 11:22:58.731813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:30.015 [2024-11-27 11:22:58.731820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:30.015 [2024-11-27 11:22:58.731825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:30.015 [2024-11-27 11:22:58.731830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:30.015 [2024-11-27 11:22:58.731835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:30.015 [2024-11-27 11:22:58.731841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.015 [2024-11-27 11:22:58.731845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:30.015 [2024-11-27 11:22:58.731850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:30.015 [2024-11-27 11:22:58.731855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.015 [2024-11-27 11:22:58.731860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:30.015 [2024-11-27 11:22:58.731865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:30.015 [2024-11-27 11:22:58.731870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.016 [2024-11-27 11:22:58.731875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:30.016 [2024-11-27 11:22:58.731880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:30.016 [2024-11-27 11:22:58.731885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.016 [2024-11-27 11:22:58.731900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:30.016 [2024-11-27 11:22:58.731905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:30.016 [2024-11-27 11:22:58.731914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.016 [2024-11-27 11:22:58.731919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:30.016 [2024-11-27 11:22:58.731924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:30.016 [2024-11-27 11:22:58.731929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.016 [2024-11-27 11:22:58.731935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:30.016 [2024-11-27 11:22:58.731940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:30.016 [2024-11-27 11:22:58.731946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:30.016 [2024-11-27 11:22:58.731952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:30.016 [2024-11-27 11:22:58.731957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:30.016 [2024-11-27 11:22:58.731963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:30.016 [2024-11-27 11:22:58.731969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:30.016 [2024-11-27 11:22:58.731975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:30.016 [2024-11-27 11:22:58.731981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.016 [2024-11-27 11:22:58.731994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:30.016 [2024-11-27 11:22:58.732000] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.75 MiB 00:27:30.016 [2024-11-27 11:22:58.732005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.016 [2024-11-27 11:22:58.732015] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:30.016 [2024-11-27 11:22:58.732022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:30.016 [2024-11-27 11:22:58.732028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:30.016 [2024-11-27 11:22:58.732035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.016 [2024-11-27 11:22:58.732043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:30.016 [2024-11-27 11:22:58.732049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:30.016 [2024-11-27 11:22:58.732056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:30.016 [2024-11-27 11:22:58.732062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:30.016 [2024-11-27 11:22:58.732068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:30.016 [2024-11-27 11:22:58.732074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:30.016 [2024-11-27 11:22:58.732081] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:30.016 [2024-11-27 11:22:58.732088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.016 [2024-11-27 11:22:58.732100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:30.016 [2024-11-27 11:22:58.732106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:30.016 [2024-11-27 11:22:58.732112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:30.016 [2024-11-27 11:22:58.732118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:30.016 [2024-11-27 11:22:58.732126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:30.016 [2024-11-27 11:22:58.732132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:30.016 [2024-11-27 11:22:58.732139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:30.016 [2024-11-27 11:22:58.732145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:30.016 [2024-11-27 11:22:58.732151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:30.016 [2024-11-27 11:22:58.732157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:30.016 [2024-11-27 11:22:58.732163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:30.016 [2024-11-27 11:22:58.732170] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:30.016 [2024-11-27 11:22:58.732175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:30.016 [2024-11-27 11:22:58.732182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:30.016 [2024-11-27 11:22:58.732188] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:30.016 [2024-11-27 11:22:58.732195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.016 [2024-11-27 11:22:58.732202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:30.016 [2024-11-27 11:22:58.732209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:30.016 [2024-11-27 11:22:58.732215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:30.016 [2024-11-27 11:22:58.732221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:30.016 [2024-11-27 11:22:58.732230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.016 [2024-11-27 11:22:58.732236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:30.016 [2024-11-27 11:22:58.732243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:27:30.016 [2024-11-27 11:22:58.732249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.016 [2024-11-27 11:22:58.751871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.016 [2024-11-27 11:22:58.751921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:30.016 [2024-11-27 11:22:58.751939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.588 ms 00:27:30.016 [2024-11-27 11:22:58.751947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.016 [2024-11-27 11:22:58.752032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.016 [2024-11-27 11:22:58.752045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:30.016 [2024-11-27 11:22:58.752053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:30.016 [2024-11-27 11:22:58.752060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.016 [2024-11-27 11:22:58.761285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.016 [2024-11-27 11:22:58.761325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:30.016 [2024-11-27 11:22:58.761338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.173 ms 00:27:30.016 [2024-11-27 11:22:58.761348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.016 [2024-11-27 11:22:58.761384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.016 [2024-11-27 11:22:58.761396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:27:30.016 [2024-11-27 11:22:58.761408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:30.016 [2024-11-27 11:22:58.761418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.016 [2024-11-27 11:22:58.761771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.016 [2024-11-27 11:22:58.761806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:30.016 [2024-11-27 11:22:58.761818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:27:30.016 [2024-11-27 11:22:58.761830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.016 [2024-11-27 11:22:58.762014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.016 [2024-11-27 11:22:58.762035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:30.016 [2024-11-27 11:22:58.762053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:27:30.016 [2024-11-27 11:22:58.762065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.016 [2024-11-27 11:22:58.766790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.016 [2024-11-27 11:22:58.766818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:30.016 [2024-11-27 11:22:58.766825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.696 ms 00:27:30.016 [2024-11-27 11:22:58.766831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.016 [2024-11-27 11:22:58.769194] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:30.016 [2024-11-27 11:22:58.769224] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:30.016 [2024-11-27 11:22:58.769232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.016 [2024-11-27 11:22:58.769239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:30.016 [2024-11-27 11:22:58.769246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.300 ms 00:27:30.016 [2024-11-27 11:22:58.769251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.016 [2024-11-27 11:22:58.780367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.016 [2024-11-27 11:22:58.780398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:30.016 [2024-11-27 11:22:58.780408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.086 ms 00:27:30.016 [2024-11-27 11:22:58.780414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.016 [2024-11-27 11:22:58.782166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.016 [2024-11-27 11:22:58.782190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:30.016 [2024-11-27 11:22:58.782197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms 00:27:30.016 [2024-11-27 11:22:58.782203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.016 [2024-11-27 11:22:58.783717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.017 [2024-11-27 11:22:58.783742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:30.017 [2024-11-27 11:22:58.783748] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:27:30.017 [2024-11-27 11:22:58.783754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.017 [2024-11-27 11:22:58.784010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.017 [2024-11-27 11:22:58.784020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:30.017 [2024-11-27 11:22:58.784027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:27:30.017 [2024-11-27 11:22:58.784033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.017 [2024-11-27 11:22:58.798509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.017 [2024-11-27 11:22:58.798548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:30.017 [2024-11-27 11:22:58.798557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.462 ms 00:27:30.017 [2024-11-27 11:22:58.798564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.017 [2024-11-27 11:22:58.804292] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:30.017 [2024-11-27 11:22:58.806264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.017 [2024-11-27 11:22:58.806288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:30.017 [2024-11-27 11:22:58.806301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.666 ms 00:27:30.017 [2024-11-27 11:22:58.806307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.017 [2024-11-27 11:22:58.806344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.017 [2024-11-27 11:22:58.806357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:30.017 [2024-11-27 11:22:58.806364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:30.017 [2024-11-27 11:22:58.806372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.017 [2024-11-27 11:22:58.806843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.017 [2024-11-27 11:22:58.806864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:30.017 [2024-11-27 11:22:58.806871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:27:30.017 [2024-11-27 11:22:58.806879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.017 [2024-11-27 11:22:58.806921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.017 [2024-11-27 11:22:58.806928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:30.017 [2024-11-27 11:22:58.806938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:30.017 [2024-11-27 11:22:58.806944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.017 [2024-11-27 11:22:58.806968] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:30.017 [2024-11-27 11:22:58.806978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.017 [2024-11-27 11:22:58.806984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:30.017 [2024-11-27 11:22:58.806991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:30.017 [2024-11-27 11:22:58.806996] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.017 [2024-11-27 11:22:58.810442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.017 [2024-11-27 11:22:58.810469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:30.017 [2024-11-27 11:22:58.810477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:27:30.017 [2024-11-27 11:22:58.810483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.017 [2024-11-27 11:22:58.810537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.017 [2024-11-27 11:22:58.810544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:30.017 [2024-11-27 11:22:58.810550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:30.017 [2024-11-27 11:22:58.810556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.017 [2024-11-27 11:22:58.811526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 89.124 ms, result 0 00:27:31.398  [2024-11-27T11:23:01.219Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-27T11:23:02.158Z] Copying: 38/1024 [MB] (14 MBps) [2024-11-27T11:23:03.100Z] Copying: 56/1024 [MB] (17 MBps) [2024-11-27T11:23:04.040Z] Copying: 74/1024 [MB] (18 MBps) [2024-11-27T11:23:04.982Z] Copying: 96/1024 [MB] (21 MBps) [2024-11-27T11:23:06.364Z] Copying: 117/1024 [MB] (21 MBps) [2024-11-27T11:23:07.308Z] Copying: 141/1024 [MB] (23 MBps) [2024-11-27T11:23:08.247Z] Copying: 161/1024 [MB] (20 MBps) [2024-11-27T11:23:09.189Z] Copying: 179/1024 [MB] (17 MBps) [2024-11-27T11:23:10.135Z] Copying: 200/1024 [MB] (20 MBps) [2024-11-27T11:23:11.075Z] Copying: 221/1024 [MB] (20 MBps) [2024-11-27T11:23:12.021Z] Copying: 240/1024 [MB] (19 MBps) [2024-11-27T11:23:12.966Z] Copying: 250/1024 [MB] (10 MBps) [2024-11-27T11:23:14.353Z] Copying: 265/1024 [MB] (14 MBps) [2024-11-27T11:23:15.298Z] Copying: 282/1024 [MB] (17 MBps) [2024-11-27T11:23:16.244Z] Copying: 301/1024 [MB] (18 MBps) [2024-11-27T11:23:17.187Z] Copying: 313/1024 [MB] (11 MBps) [2024-11-27T11:23:18.158Z] Copying: 330/1024 [MB] (16 MBps) [2024-11-27T11:23:19.101Z] Copying: 352/1024 [MB] (22 MBps) [2024-11-27T11:23:20.094Z] Copying: 364/1024 [MB] (11 MBps) [2024-11-27T11:23:21.033Z] Copying: 374/1024 [MB] (10 MBps) [2024-11-27T11:23:21.976Z] Copying: 395/1024 [MB] (21 MBps) [2024-11-27T11:23:23.455Z] Copying: 416/1024 [MB] (20 MBps) [2024-11-27T11:23:24.028Z] Copying: 436/1024 [MB] (19 MBps) [2024-11-27T11:23:24.973Z] Copying: 446/1024 [MB] (10 MBps) [2024-11-27T11:23:26.362Z] Copying: 456/1024 [MB] (10 MBps) [2024-11-27T11:23:27.305Z] Copying: 467/1024 [MB] (10 MBps) [2024-11-27T11:23:28.250Z] Copying: 477/1024 [MB] (10 MBps) [2024-11-27T11:23:29.196Z] Copying: 487/1024 [MB] (10 MBps) [2024-11-27T11:23:30.139Z] Copying: 497/1024 [MB] (10 MBps) [2024-11-27T11:23:31.085Z] Copying: 508/1024 [MB] (10 MBps) [2024-11-27T11:23:32.033Z] Copying: 522/1024 [MB] (14 MBps) [2024-11-27T11:23:32.979Z] Copying: 533/1024 [MB] (10 MBps) [2024-11-27T11:23:34.368Z] Copying: 543/1024 [MB] (10 MBps) [2024-11-27T11:23:35.315Z] Copying: 554/1024 [MB] (11 MBps) [2024-11-27T11:23:36.260Z] Copying: 565/1024 [MB] (10 MBps) [2024-11-27T11:23:37.206Z] Copying: 575/1024 [MB] (10 MBps) [2024-11-27T11:23:38.152Z] Copying: 585/1024 [MB] (10 MBps) [2024-11-27T11:23:39.098Z] Copying: 596/1024 [MB] (10 MBps) [2024-11-27T11:23:40.044Z] Copying: 607/1024 [MB] (11 MBps) 
[2024-11-27T11:23:40.991Z] Copying: 618/1024 [MB] (10 MBps) [2024-11-27T11:23:42.380Z] Copying: 628/1024 [MB] (10 MBps) [2024-11-27T11:23:42.952Z] Copying: 641/1024 [MB] (12 MBps) [2024-11-27T11:23:44.340Z] Copying: 651/1024 [MB] (10 MBps) [2024-11-27T11:23:45.285Z] Copying: 662/1024 [MB] (10 MBps) [2024-11-27T11:23:46.227Z] Copying: 672/1024 [MB] (10 MBps) [2024-11-27T11:23:47.170Z] Copying: 683/1024 [MB] (10 MBps) [2024-11-27T11:23:48.111Z] Copying: 699/1024 [MB] (16 MBps) [2024-11-27T11:23:49.058Z] Copying: 728/1024 [MB] (28 MBps) [2024-11-27T11:23:50.002Z] Copying: 751/1024 [MB] (23 MBps) [2024-11-27T11:23:50.946Z] Copying: 771/1024 [MB] (19 MBps) [2024-11-27T11:23:52.333Z] Copying: 790/1024 [MB] (19 MBps) [2024-11-27T11:23:53.278Z] Copying: 802/1024 [MB] (11 MBps) [2024-11-27T11:23:54.222Z] Copying: 819/1024 [MB] (16 MBps) [2024-11-27T11:23:55.217Z] Copying: 834/1024 [MB] (15 MBps) [2024-11-27T11:23:56.185Z] Copying: 851/1024 [MB] (17 MBps) [2024-11-27T11:23:57.132Z] Copying: 867/1024 [MB] (16 MBps) [2024-11-27T11:23:58.074Z] Copying: 880/1024 [MB] (13 MBps) [2024-11-27T11:23:59.019Z] Copying: 891/1024 [MB] (10 MBps) [2024-11-27T11:23:59.963Z] Copying: 906/1024 [MB] (15 MBps) [2024-11-27T11:24:01.352Z] Copying: 924/1024 [MB] (17 MBps) [2024-11-27T11:24:02.296Z] Copying: 946/1024 [MB] (22 MBps) [2024-11-27T11:24:03.242Z] Copying: 966/1024 [MB] (19 MBps) [2024-11-27T11:24:04.184Z] Copying: 987/1024 [MB] (21 MBps) [2024-11-27T11:24:05.129Z] Copying: 1006/1024 [MB] (19 MBps) [2024-11-27T11:24:05.129Z] Copying: 1023/1024 [MB] (16 MBps) [2024-11-27T11:24:05.129Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-27 11:24:05.031148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.246 [2024-11-27 11:24:05.031218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:36.246 [2024-11-27 11:24:05.031234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:36.246 [2024-11-27 11:24:05.031251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.246 [2024-11-27 11:24:05.031273] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:36.246 [2024-11-27 11:24:05.032076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.247 [2024-11-27 11:24:05.032105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:36.247 [2024-11-27 11:24:05.032118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:28:36.247 [2024-11-27 11:24:05.032127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.247 [2024-11-27 11:24:05.032360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.247 [2024-11-27 11:24:05.032372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:36.247 [2024-11-27 11:24:05.032382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:28:36.247 [2024-11-27 11:24:05.032390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.247 [2024-11-27 11:24:05.036320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.247 [2024-11-27 11:24:05.036350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:36.247 [2024-11-27 11:24:05.036361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.910 ms 00:28:36.247 [2024-11-27 11:24:05.036369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:36.247 [2024-11-27 11:24:05.043229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.247 [2024-11-27 11:24:05.043275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:36.247 [2024-11-27 11:24:05.043287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.840 ms 00:28:36.247 [2024-11-27 11:24:05.043296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.247 [2024-11-27 11:24:05.046495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.247 [2024-11-27 11:24:05.046548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:36.247 [2024-11-27 11:24:05.046559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.121 ms 00:28:36.247 [2024-11-27 11:24:05.046566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.247 [2024-11-27 11:24:05.051827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.247 [2024-11-27 11:24:05.051922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:36.247 [2024-11-27 11:24:05.051941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.213 ms 00:28:36.247 [2024-11-27 11:24:05.051953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.247 [2024-11-27 11:24:05.057318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.247 [2024-11-27 11:24:05.057386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:36.247 [2024-11-27 11:24:05.057397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.299 ms 00:28:36.247 [2024-11-27 11:24:05.057406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.247 [2024-11-27 11:24:05.060594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.247 [2024-11-27 11:24:05.060650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:36.247 [2024-11-27 11:24:05.060662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.165 ms 00:28:36.247 [2024-11-27 11:24:05.060670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.247 [2024-11-27 11:24:05.063489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.247 [2024-11-27 11:24:05.063541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:36.247 [2024-11-27 11:24:05.063551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.757 ms 00:28:36.247 [2024-11-27 11:24:05.063559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.247 [2024-11-27 11:24:05.065957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.247 [2024-11-27 11:24:05.066006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:36.247 [2024-11-27 11:24:05.066017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.352 ms 00:28:36.247 [2024-11-27 11:24:05.066024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.247 [2024-11-27 11:24:05.068231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.247 [2024-11-27 11:24:05.068279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:36.247 [2024-11-27 11:24:05.068291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.133 ms 00:28:36.247 [2024-11-27 
11:24:05.068299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.247 [2024-11-27 11:24:05.068339] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:36.247 [2024-11-27 11:24:05.068358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:36.247 [2024-11-27 11:24:05.068371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:36.247 [2024-11-27 11:24:05.068382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068541] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:36.247 [2024-11-27 11:24:05.068701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 
11:24:05.068733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:28:36.248 [2024-11-27 11:24:05.068952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.068992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:36.248 [2024-11-27 11:24:05.069203] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:36.248 [2024-11-27 11:24:05.069222] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0b653020-fbad-440a-9449-43326b5105a8 00:28:36.248 [2024-11-27 11:24:05.069232] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:36.248 [2024-11-27 11:24:05.069240] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:36.248 [2024-11-27 11:24:05.069248] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:36.248 [2024-11-27 11:24:05.069257] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:36.248 [2024-11-27 11:24:05.069265] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:36.248 [2024-11-27 11:24:05.069273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:36.248 [2024-11-27 11:24:05.069281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:36.248 [2024-11-27 11:24:05.069288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:36.248 [2024-11-27 11:24:05.069294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:36.248 [2024-11-27 11:24:05.069302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.248 [2024-11-27 11:24:05.069329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:36.248 [2024-11-27 11:24:05.069340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:28:36.248 [2024-11-27 11:24:05.069348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.248 [2024-11-27 11:24:05.071675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.248 [2024-11-27 11:24:05.071717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:36.248 [2024-11-27 11:24:05.071729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.301 ms 00:28:36.248 [2024-11-27 11:24:05.071738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.248 [2024-11-27 11:24:05.071872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.248 [2024-11-27 11:24:05.071883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:36.248 [2024-11-27 11:24:05.071910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:28:36.248 [2024-11-27 11:24:05.071919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.248 [2024-11-27 11:24:05.079033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.248 [2024-11-27 11:24:05.079082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:36.248 [2024-11-27 11:24:05.079093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.248 [2024-11-27 11:24:05.079101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.248 [2024-11-27 11:24:05.079164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.248 [2024-11-27 11:24:05.079174] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:36.248 [2024-11-27 11:24:05.079182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.248 [2024-11-27 11:24:05.079197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.248 [2024-11-27 11:24:05.079263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.248 [2024-11-27 11:24:05.079275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:36.248 [2024-11-27 11:24:05.079287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.248 [2024-11-27 11:24:05.079295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.248 [2024-11-27 11:24:05.079315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.248 [2024-11-27 11:24:05.079323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:36.248 [2024-11-27 11:24:05.079331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.248 [2024-11-27 11:24:05.079338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.248 [2024-11-27 11:24:05.093398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.248 [2024-11-27 11:24:05.093450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:36.248 [2024-11-27 11:24:05.093462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.248 [2024-11-27 11:24:05.093470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.249 [2024-11-27 11:24:05.103856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.249 [2024-11-27 11:24:05.103919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:36.249 [2024-11-27 11:24:05.103941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.249 [2024-11-27 11:24:05.103949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.249 [2024-11-27 11:24:05.103999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.249 [2024-11-27 11:24:05.104008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:36.249 [2024-11-27 11:24:05.104017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.249 [2024-11-27 11:24:05.104025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.249 [2024-11-27 11:24:05.104061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.249 [2024-11-27 11:24:05.104074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:36.249 [2024-11-27 11:24:05.104082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.249 [2024-11-27 11:24:05.104091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.249 [2024-11-27 11:24:05.104161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.249 [2024-11-27 11:24:05.104172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:36.249 [2024-11-27 11:24:05.104180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.249 [2024-11-27 11:24:05.104188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.249 [2024-11-27 11:24:05.104216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:28:36.249 [2024-11-27 11:24:05.104226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:36.249 [2024-11-27 11:24:05.104238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.249 [2024-11-27 11:24:05.104246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.249 [2024-11-27 11:24:05.104288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.249 [2024-11-27 11:24:05.104297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:36.249 [2024-11-27 11:24:05.104306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.249 [2024-11-27 11:24:05.104314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.249 [2024-11-27 11:24:05.104358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.249 [2024-11-27 11:24:05.104384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:36.249 [2024-11-27 11:24:05.104393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.249 [2024-11-27 11:24:05.104402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.249 [2024-11-27 11:24:05.104538] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.352 ms, result 0 00:28:36.510 00:28:36.510 00:28:36.510 11:24:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:39.060 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:39.060 11:24:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:39.060 11:24:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:39.060 11:24:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:39.060 11:24:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:39.060 11:24:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:39.060 11:24:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:39.060 11:24:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:39.060 Process with pid 89643 is not found 00:28:39.060 11:24:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89643 00:28:39.060 11:24:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89643 ']' 00:28:39.060 11:24:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 89643 00:28:39.060 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89643) - No such process 00:28:39.060 11:24:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89643 is not found' 00:28:39.060 11:24:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:39.322 Remove shared memory files 00:28:39.322 11:24:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:39.322 11:24:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:39.322 11:24:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:39.322 11:24:08 
ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:39.322 11:24:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:39.322 11:24:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:39.322 11:24:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:39.322 ************************************ 00:28:39.322 END TEST ftl_dirty_shutdown 00:28:39.322 ************************************ 00:28:39.322 00:28:39.322 real 4m24.636s 00:28:39.322 user 4m58.433s 00:28:39.322 sys 0m29.376s 00:28:39.322 11:24:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:39.322 11:24:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:39.322 11:24:08 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:39.322 11:24:08 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:39.322 11:24:08 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:39.322 11:24:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:39.584 ************************************ 00:28:39.584 START TEST ftl_upgrade_shutdown 00:28:39.584 ************************************ 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:39.584 * Looking for test storage... 00:28:39.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:39.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.584 --rc genhtml_branch_coverage=1 00:28:39.584 --rc genhtml_function_coverage=1 00:28:39.584 --rc genhtml_legend=1 00:28:39.584 --rc geninfo_all_blocks=1 00:28:39.584 --rc geninfo_unexecuted_blocks=1 00:28:39.584 00:28:39.584 ' 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:39.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.584 --rc genhtml_branch_coverage=1 00:28:39.584 --rc genhtml_function_coverage=1 00:28:39.584 --rc genhtml_legend=1 00:28:39.584 --rc geninfo_all_blocks=1 00:28:39.584 --rc geninfo_unexecuted_blocks=1 00:28:39.584 00:28:39.584 ' 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:39.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.584 --rc genhtml_branch_coverage=1 00:28:39.584 --rc genhtml_function_coverage=1 00:28:39.584 --rc genhtml_legend=1 00:28:39.584 --rc geninfo_all_blocks=1 00:28:39.584 --rc geninfo_unexecuted_blocks=1 00:28:39.584 00:28:39.584 ' 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:39.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.584 --rc genhtml_branch_coverage=1 00:28:39.584 --rc genhtml_function_coverage=1 00:28:39.584 --rc genhtml_legend=1 00:28:39.584 --rc geninfo_all_blocks=1 00:28:39.584 --rc geninfo_unexecuted_blocks=1 00:28:39.584 00:28:39.584 ' 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:39.584 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:39.585 11:24:08 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92481 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92481 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92481 ']' 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:39.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:39.585 11:24:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:39.847 [2024-11-27 11:24:08.495703] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:39.847 [2024-11-27 11:24:08.495966] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92481 ] 00:28:39.847 [2024-11-27 11:24:08.655056] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.847 [2024-11-27 11:24:08.709219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:40.797 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:41.059 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:41.059 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:41.059 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:41.059 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:28:41.059 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:41.059 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:41.059 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:28:41.059 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:41.059 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:41.059 { 00:28:41.059 "name": "basen1", 00:28:41.059 "aliases": [ 00:28:41.059 "e9b7066b-147a-48df-80dc-0dbc8867b2fd" 00:28:41.059 ], 00:28:41.059 "product_name": "NVMe disk", 00:28:41.059 "block_size": 4096, 00:28:41.059 "num_blocks": 1310720, 00:28:41.059 "uuid": "e9b7066b-147a-48df-80dc-0dbc8867b2fd", 00:28:41.059 "numa_id": -1, 00:28:41.059 "assigned_rate_limits": { 00:28:41.059 "rw_ios_per_sec": 0, 00:28:41.059 "rw_mbytes_per_sec": 0, 00:28:41.059 "r_mbytes_per_sec": 0, 00:28:41.059 "w_mbytes_per_sec": 0 00:28:41.059 }, 00:28:41.059 "claimed": true, 00:28:41.059 "claim_type": "read_many_write_one", 00:28:41.059 "zoned": false, 00:28:41.059 "supported_io_types": { 00:28:41.059 "read": true, 00:28:41.059 "write": true, 00:28:41.059 "unmap": true, 00:28:41.059 "flush": true, 00:28:41.059 "reset": true, 00:28:41.059 "nvme_admin": true, 00:28:41.059 "nvme_io": true, 00:28:41.059 "nvme_io_md": false, 00:28:41.059 "write_zeroes": true, 00:28:41.059 "zcopy": false, 00:28:41.059 "get_zone_info": false, 00:28:41.059 "zone_management": false, 00:28:41.059 "zone_append": false, 00:28:41.059 "compare": true, 00:28:41.059 "compare_and_write": false, 00:28:41.059 "abort": true, 00:28:41.059 "seek_hole": false, 00:28:41.059 "seek_data": false, 00:28:41.059 "copy": true, 00:28:41.059 "nvme_iov_md": false 00:28:41.059 }, 00:28:41.059 "driver_specific": { 00:28:41.059 "nvme": [ 00:28:41.059 { 00:28:41.059 "pci_address": "0000:00:11.0", 00:28:41.059 "trid": { 00:28:41.059 "trtype": "PCIe", 00:28:41.059 "traddr": "0000:00:11.0" 00:28:41.059 }, 00:28:41.059 "ctrlr_data": { 00:28:41.059 "cntlid": 0, 00:28:41.059 "vendor_id": "0x1b36", 00:28:41.059 "model_number": "QEMU NVMe Ctrl", 00:28:41.059 "serial_number": "12341", 00:28:41.059 "firmware_revision": "8.0.0", 00:28:41.059 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:41.059 "oacs": { 00:28:41.059 "security": 0, 00:28:41.059 "format": 1, 00:28:41.059 "firmware": 0, 00:28:41.059 "ns_manage": 1 00:28:41.059 }, 00:28:41.059 "multi_ctrlr": false, 00:28:41.059 "ana_reporting": false 00:28:41.059 }, 00:28:41.059 "vs": { 00:28:41.059 "nvme_version": "1.4" 00:28:41.059 }, 00:28:41.059 "ns_data": { 00:28:41.059 "id": 1, 00:28:41.059 "can_share": false 00:28:41.059 } 00:28:41.059 } 00:28:41.059 ], 00:28:41.059 "mp_policy": "active_passive" 00:28:41.059 } 00:28:41.059 } 00:28:41.059 ]' 00:28:41.060 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:41.321 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:41.321 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:41.321 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:41.321 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:41.321 11:24:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:28:41.321 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:41.321 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:41.321 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:41.321 11:24:09 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:41.321 11:24:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:41.582 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a636d187-dc8f-4890-b92e-9cf690e6d302 00:28:41.583 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:41.583 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a636d187-dc8f-4890-b92e-9cf690e6d302 00:28:41.583 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:41.843 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=f970c4b1-f33b-4051-a481-b0f7154c382e 00:28:41.843 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u f970c4b1-f33b-4051-a481-b0f7154c382e 00:28:42.105 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=87ed9dca-e14f-4ce2-ae01-a66d6b49403e 00:28:42.105 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 87ed9dca-e14f-4ce2-ae01-a66d6b49403e ]] 00:28:42.105 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 87ed9dca-e14f-4ce2-ae01-a66d6b49403e 5120 00:28:42.105 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:42.105 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:42.105 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=87ed9dca-e14f-4ce2-ae01-a66d6b49403e 00:28:42.105 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:42.105 11:24:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 87ed9dca-e14f-4ce2-ae01-a66d6b49403e 00:28:42.105 11:24:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=87ed9dca-e14f-4ce2-ae01-a66d6b49403e 00:28:42.105 11:24:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:42.105 11:24:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:42.105 11:24:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:28:42.105 11:24:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 87ed9dca-e14f-4ce2-ae01-a66d6b49403e 00:28:42.367 11:24:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:42.367 { 00:28:42.367 "name": "87ed9dca-e14f-4ce2-ae01-a66d6b49403e", 00:28:42.367 "aliases": [ 00:28:42.367 "lvs/basen1p0" 00:28:42.367 ], 00:28:42.367 "product_name": "Logical Volume", 00:28:42.367 "block_size": 4096, 00:28:42.367 "num_blocks": 5242880, 00:28:42.367 "uuid": "87ed9dca-e14f-4ce2-ae01-a66d6b49403e", 00:28:42.367 "assigned_rate_limits": { 00:28:42.367 "rw_ios_per_sec": 0, 00:28:42.367 "rw_mbytes_per_sec": 0, 00:28:42.367 "r_mbytes_per_sec": 0, 00:28:42.367 "w_mbytes_per_sec": 0 00:28:42.367 }, 00:28:42.367 "claimed": false, 00:28:42.367 "zoned": false, 00:28:42.367 "supported_io_types": { 00:28:42.367 "read": true, 00:28:42.367 "write": true, 00:28:42.367 "unmap": true, 00:28:42.367 "flush": false, 00:28:42.367 "reset": true, 00:28:42.367 "nvme_admin": false, 00:28:42.367 "nvme_io": false, 00:28:42.367 "nvme_io_md": false, 00:28:42.367 "write_zeroes": 
true, 00:28:42.367 "zcopy": false, 00:28:42.367 "get_zone_info": false, 00:28:42.367 "zone_management": false, 00:28:42.367 "zone_append": false, 00:28:42.367 "compare": false, 00:28:42.367 "compare_and_write": false, 00:28:42.367 "abort": false, 00:28:42.367 "seek_hole": true, 00:28:42.367 "seek_data": true, 00:28:42.367 "copy": false, 00:28:42.367 "nvme_iov_md": false 00:28:42.367 }, 00:28:42.367 "driver_specific": { 00:28:42.367 "lvol": { 00:28:42.367 "lvol_store_uuid": "f970c4b1-f33b-4051-a481-b0f7154c382e", 00:28:42.367 "base_bdev": "basen1", 00:28:42.367 "thin_provision": true, 00:28:42.367 "num_allocated_clusters": 0, 00:28:42.367 "snapshot": false, 00:28:42.367 "clone": false, 00:28:42.367 "esnap_clone": false 00:28:42.367 } 00:28:42.367 } 00:28:42.367 } 00:28:42.367 ]' 00:28:42.367 11:24:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:42.367 11:24:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:42.367 11:24:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:42.367 11:24:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:28:42.367 11:24:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:28:42.367 11:24:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:28:42.367 11:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:42.367 11:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:42.367 11:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:42.629 11:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:42.629 11:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:42.629 11:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:42.890 11:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:42.890 11:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:42.890 11:24:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 87ed9dca-e14f-4ce2-ae01-a66d6b49403e -c cachen1p0 --l2p_dram_limit 2 00:28:43.152 [2024-11-27 11:24:11.857527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.152 [2024-11-27 11:24:11.857605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:43.152 [2024-11-27 11:24:11.857622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:43.152 [2024-11-27 11:24:11.857634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.152 [2024-11-27 11:24:11.857701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.152 [2024-11-27 11:24:11.857715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:43.152 [2024-11-27 11:24:11.857724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:28:43.152 [2024-11-27 11:24:11.857738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.152 [2024-11-27 11:24:11.857762] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:43.152 [2024-11-27 
11:24:11.858102] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:43.152 [2024-11-27 11:24:11.858181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.152 [2024-11-27 11:24:11.858192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:43.152 [2024-11-27 11:24:11.858207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.425 ms 00:28:43.152 [2024-11-27 11:24:11.858218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.152 [2024-11-27 11:24:11.858630] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID a1a157f8-afb4-4c99-8e2c-a88e073d37a5 00:28:43.152 [2024-11-27 11:24:11.860435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.152 [2024-11-27 11:24:11.860485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:43.152 [2024-11-27 11:24:11.860501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:43.152 [2024-11-27 11:24:11.860514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.152 [2024-11-27 11:24:11.869343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.152 [2024-11-27 11:24:11.869385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:43.152 [2024-11-27 11:24:11.869404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.741 ms 00:28:43.152 [2024-11-27 11:24:11.869413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.152 [2024-11-27 11:24:11.869464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.152 [2024-11-27 11:24:11.869473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:43.152 [2024-11-27 11:24:11.869484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:43.152 [2024-11-27 11:24:11.869497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.152 [2024-11-27 11:24:11.869569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.152 [2024-11-27 11:24:11.869583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:43.152 [2024-11-27 11:24:11.869595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:43.152 [2024-11-27 11:24:11.869603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.152 [2024-11-27 11:24:11.869632] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:43.152 [2024-11-27 11:24:11.871876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.152 [2024-11-27 11:24:11.871936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:43.152 [2024-11-27 11:24:11.871949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.256 ms 00:28:43.152 [2024-11-27 11:24:11.871959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.152 [2024-11-27 11:24:11.871989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.152 [2024-11-27 11:24:11.872000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:43.152 [2024-11-27 11:24:11.872009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:43.152 [2024-11-27 11:24:11.872021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:43.152 [2024-11-27 11:24:11.872039] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:43.152 [2024-11-27 11:24:11.872195] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:43.152 [2024-11-27 11:24:11.872208] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:43.152 [2024-11-27 11:24:11.872222] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:43.152 [2024-11-27 11:24:11.872233] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:43.152 [2024-11-27 11:24:11.872252] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:43.152 [2024-11-27 11:24:11.872261] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:43.152 [2024-11-27 11:24:11.872280] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:43.152 [2024-11-27 11:24:11.872291] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:43.152 [2024-11-27 11:24:11.872300] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:43.153 [2024-11-27 11:24:11.872311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.153 [2024-11-27 11:24:11.872321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:43.153 [2024-11-27 11:24:11.872329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:28:43.153 [2024-11-27 11:24:11.872339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.153 [2024-11-27 11:24:11.872426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.153 [2024-11-27 11:24:11.872458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:43.153 [2024-11-27 11:24:11.872467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:28:43.153 [2024-11-27 11:24:11.872476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.153 [2024-11-27 11:24:11.872576] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:43.153 [2024-11-27 11:24:11.872598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:43.153 [2024-11-27 11:24:11.872611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:43.153 [2024-11-27 11:24:11.872626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.153 [2024-11-27 11:24:11.872635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:43.153 [2024-11-27 11:24:11.872646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:43.153 [2024-11-27 11:24:11.872654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:43.153 [2024-11-27 11:24:11.872665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:43.153 [2024-11-27 11:24:11.872672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:43.153 [2024-11-27 11:24:11.872682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.153 [2024-11-27 11:24:11.872690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:43.153 [2024-11-27 11:24:11.872701] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:28:43.153 [2024-11-27 11:24:11.872708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.153 [2024-11-27 11:24:11.872720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:43.153 [2024-11-27 11:24:11.872728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:43.153 [2024-11-27 11:24:11.872737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.153 [2024-11-27 11:24:11.872745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:43.153 [2024-11-27 11:24:11.872756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:43.153 [2024-11-27 11:24:11.872764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.153 [2024-11-27 11:24:11.872777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:43.153 [2024-11-27 11:24:11.872785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:43.153 [2024-11-27 11:24:11.872795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:43.153 [2024-11-27 11:24:11.872803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:43.153 [2024-11-27 11:24:11.872813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:43.153 [2024-11-27 11:24:11.872822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:43.153 [2024-11-27 11:24:11.872831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:43.153 [2024-11-27 11:24:11.872839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:43.153 [2024-11-27 11:24:11.872849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:43.153 [2024-11-27 11:24:11.872856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:43.153 [2024-11-27 11:24:11.872870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:43.153 [2024-11-27 11:24:11.872878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:43.153 [2024-11-27 11:24:11.872904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:43.153 [2024-11-27 11:24:11.872912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:43.153 [2024-11-27 11:24:11.872922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.153 [2024-11-27 11:24:11.872930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:43.153 [2024-11-27 11:24:11.872940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:43.153 [2024-11-27 11:24:11.872948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.153 [2024-11-27 11:24:11.872958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:43.153 [2024-11-27 11:24:11.872965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:43.153 [2024-11-27 11:24:11.872975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.153 [2024-11-27 11:24:11.872983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:43.153 [2024-11-27 11:24:11.872993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:43.153 [2024-11-27 11:24:11.873000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.153 [2024-11-27 11:24:11.873022] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:28:43.153 [2024-11-27 11:24:11.873031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:43.153 [2024-11-27 11:24:11.873048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:43.153 [2024-11-27 11:24:11.873057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.153 [2024-11-27 11:24:11.873067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:43.153 [2024-11-27 11:24:11.873074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:43.153 [2024-11-27 11:24:11.873082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:43.153 [2024-11-27 11:24:11.873089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:43.153 [2024-11-27 11:24:11.873100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:43.153 [2024-11-27 11:24:11.873107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:43.153 [2024-11-27 11:24:11.873122] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:43.153 [2024-11-27 11:24:11.873132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:43.153 [2024-11-27 11:24:11.873144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:43.153 [2024-11-27 11:24:11.873152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:43.153 [2024-11-27 11:24:11.873161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:43.153 [2024-11-27 11:24:11.873168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:43.153 [2024-11-27 11:24:11.873177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:43.153 [2024-11-27 11:24:11.873185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:43.153 [2024-11-27 11:24:11.873196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:43.153 [2024-11-27 11:24:11.873203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:43.153 [2024-11-27 11:24:11.873212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:43.153 [2024-11-27 11:24:11.873220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:43.153 [2024-11-27 11:24:11.873229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:43.153 [2024-11-27 11:24:11.873236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:43.153 [2024-11-27 11:24:11.873246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:43.153 [2024-11-27 11:24:11.873253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:43.153 [2024-11-27 11:24:11.873262] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:43.153 [2024-11-27 11:24:11.873274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:43.153 [2024-11-27 11:24:11.873285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:43.153 [2024-11-27 11:24:11.873292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:43.153 [2024-11-27 11:24:11.873301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:43.153 [2024-11-27 11:24:11.873310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:43.153 [2024-11-27 11:24:11.873320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.153 [2024-11-27 11:24:11.873329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:43.153 [2024-11-27 11:24:11.873343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.809 ms 00:28:43.153 [2024-11-27 11:24:11.873350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.153 [2024-11-27 11:24:11.873392] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
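For orientation, the FTL instance whose startup is traced here was assembled a few steps earlier with the rpc.py calls quoted below. This is only a sketch reconstructed from the invocations visible in this run's trace (the $RPC shorthand is added for readability; the PCI addresses, sizes and UUIDs are simply the values this particular run produced):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base device: attach 0000:00:11.0, drop the stale lvol store, create a new one and a thin lvol.
  $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
  $RPC bdev_lvol_get_lvstores
  $RPC bdev_lvol_delete_lvstore -u a636d187-dc8f-4890-b92e-9cf690e6d302
  $RPC bdev_lvol_create_lvstore basen1 lvs
  $RPC bdev_lvol_create basen1p0 20480 -t -u f970c4b1-f33b-4051-a481-b0f7154c382e
  # NV cache: attach 0000:00:10.0 and split off a 5120 MiB partition for the write buffer.
  $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create cachen1 -s 5120 1
  # FTL bdev: the lvol is the base device, cachen1p0 the NV cache, with the test's L2P DRAM limit.
  $RPC -t 60 bdev_ftl_create -b ftl -d 87ed9dca-e14f-4ce2-ae01-a66d6b49403e -c cachen1p0 --l2p_dram_limit 2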
00:28:43.153 [2024-11-27 11:24:11.873401] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:46.452 [2024-11-27 11:24:15.146498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.146593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:46.452 [2024-11-27 11:24:15.146617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3273.082 ms 00:28:46.452 [2024-11-27 11:24:15.146627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.452 [2024-11-27 11:24:15.160150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.160208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:46.452 [2024-11-27 11:24:15.160225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.397 ms 00:28:46.452 [2024-11-27 11:24:15.160234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.452 [2024-11-27 11:24:15.160291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.160300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:46.452 [2024-11-27 11:24:15.160315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:46.452 [2024-11-27 11:24:15.160323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.452 [2024-11-27 11:24:15.171639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.171690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:46.452 [2024-11-27 11:24:15.171711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.261 ms 00:28:46.452 [2024-11-27 11:24:15.171719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.452 [2024-11-27 11:24:15.171755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.171766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:46.452 [2024-11-27 11:24:15.171777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:46.452 [2024-11-27 11:24:15.171784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.452 [2024-11-27 11:24:15.172374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.172420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:46.452 [2024-11-27 11:24:15.172437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.507 ms 00:28:46.452 [2024-11-27 11:24:15.172446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.452 [2024-11-27 11:24:15.172502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.172510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:46.452 [2024-11-27 11:24:15.172524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:46.452 [2024-11-27 11:24:15.172532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.452 [2024-11-27 11:24:15.195804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.195935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:46.452 [2024-11-27 11:24:15.195975] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.235 ms 00:28:46.452 [2024-11-27 11:24:15.195997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.452 [2024-11-27 11:24:15.206409] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:46.452 [2024-11-27 11:24:15.207861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.207930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:46.452 [2024-11-27 11:24:15.207948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.617 ms 00:28:46.452 [2024-11-27 11:24:15.207960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.452 [2024-11-27 11:24:15.224879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.224961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:46.452 [2024-11-27 11:24:15.224974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.886 ms 00:28:46.452 [2024-11-27 11:24:15.224988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.452 [2024-11-27 11:24:15.225087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.225101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:46.452 [2024-11-27 11:24:15.225110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:28:46.452 [2024-11-27 11:24:15.225120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.452 [2024-11-27 11:24:15.229875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.229948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:46.452 [2024-11-27 11:24:15.229960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.717 ms 00:28:46.452 [2024-11-27 11:24:15.229971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.452 [2024-11-27 11:24:15.235095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.235152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:46.452 [2024-11-27 11:24:15.235162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.093 ms 00:28:46.452 [2024-11-27 11:24:15.235172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.452 [2024-11-27 11:24:15.235485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.452 [2024-11-27 11:24:15.235503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:46.452 [2024-11-27 11:24:15.235517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:28:46.452 [2024-11-27 11:24:15.235542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.453 [2024-11-27 11:24:15.274679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.453 [2024-11-27 11:24:15.274743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:46.453 [2024-11-27 11:24:15.274757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.111 ms 00:28:46.453 [2024-11-27 11:24:15.274768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.453 [2024-11-27 11:24:15.281800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:46.453 [2024-11-27 11:24:15.281864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:46.453 [2024-11-27 11:24:15.281881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.947 ms 00:28:46.453 [2024-11-27 11:24:15.281910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.453 [2024-11-27 11:24:15.287636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.453 [2024-11-27 11:24:15.287693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:46.453 [2024-11-27 11:24:15.287704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.675 ms 00:28:46.453 [2024-11-27 11:24:15.287714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.453 [2024-11-27 11:24:15.293234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.453 [2024-11-27 11:24:15.293292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:46.453 [2024-11-27 11:24:15.293303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.472 ms 00:28:46.453 [2024-11-27 11:24:15.293317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.453 [2024-11-27 11:24:15.293371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.453 [2024-11-27 11:24:15.293384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:46.453 [2024-11-27 11:24:15.293393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:46.453 [2024-11-27 11:24:15.293404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.453 [2024-11-27 11:24:15.293495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.453 [2024-11-27 11:24:15.293509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:46.453 [2024-11-27 11:24:15.293518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:46.453 [2024-11-27 11:24:15.293528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.453 [2024-11-27 11:24:15.294814] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3436.821 ms, result 0 00:28:46.453 { 00:28:46.453 "name": "ftl", 00:28:46.453 "uuid": "a1a157f8-afb4-4c99-8e2c-a88e073d37a5" 00:28:46.453 } 00:28:46.453 11:24:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:46.714 [2024-11-27 11:24:15.517810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.714 11:24:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:46.975 11:24:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:47.238 [2024-11-27 11:24:15.934214] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:47.238 11:24:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:47.499 [2024-11-27 11:24:16.138655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:47.499 11:24:16 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:47.761 Fill FTL, iteration 1 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=92604 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 92604 /var/tmp/spdk.tgt.sock 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92604 ']' 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:47.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:47.761 11:24:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:47.761 [2024-11-27 11:24:16.552402] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
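At this point the new ftl bdev has already been exported over NVMe/TCP, and a second SPDK target (the "initiator" side used by the tcp_dd helper) is starting on /var/tmp/spdk.tgt.sock. As a sketch of the export, using the commands visible slightly earlier in the trace ($RPC is again just shorthand for readability):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Export the ftl bdev as a namespace of an NVMe/TCP subsystem listening on 127.0.0.1:4420.
  $RPC nvmf_create_transport --trtype TCP
  $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  $RPC save_config

The initiator process then attaches to this subsystem with bdev_nvme_attach_controller over tcp (visible just below), which exposes the namespace locally as ftln1 for spdk_dd to write to and read from.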
00:28:47.761 [2024-11-27 11:24:16.552495] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92604 ] 00:28:48.022 [2024-11-27 11:24:16.695072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.022 [2024-11-27 11:24:16.734561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.616 11:24:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:48.616 11:24:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:48.616 11:24:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:48.875 ftln1 00:28:48.875 11:24:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:48.875 11:24:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:49.135 11:24:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:49.135 11:24:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 92604 00:28:49.135 11:24:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92604 ']' 00:28:49.135 11:24:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92604 00:28:49.135 11:24:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:49.135 11:24:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:49.135 11:24:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92604 00:28:49.135 11:24:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:49.135 killing process with pid 92604 00:28:49.135 11:24:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:49.135 11:24:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92604' 00:28:49.135 11:24:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92604 00:28:49.135 11:24:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92604 00:28:49.395 11:24:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:49.395 11:24:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:49.654 [2024-11-27 11:24:18.326430] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:49.654 [2024-11-27 11:24:18.326538] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92639 ] 00:28:49.654 [2024-11-27 11:24:18.470378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.914 [2024-11-27 11:24:18.538399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.859  [2024-11-27T11:24:21.129Z] Copying: 178/1024 [MB] (178 MBps) [2024-11-27T11:24:22.069Z] Copying: 402/1024 [MB] (224 MBps) [2024-11-27T11:24:23.011Z] Copying: 632/1024 [MB] (230 MBps) [2024-11-27T11:24:23.582Z] Copying: 867/1024 [MB] (235 MBps) [2024-11-27T11:24:23.846Z] Copying: 1024/1024 [MB] (average 219 MBps) 00:28:54.963 00:28:54.963 11:24:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:54.963 Calculate MD5 checksum, iteration 1 00:28:54.963 11:24:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:54.963 11:24:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:54.963 11:24:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:54.963 11:24:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:54.963 11:24:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:54.963 11:24:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:54.963 11:24:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:54.963 [2024-11-27 11:24:23.662956] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:54.963 [2024-11-27 11:24:23.663082] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92693 ] 00:28:54.963 [2024-11-27 11:24:23.811083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.225 [2024-11-27 11:24:23.870281] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.610  [2024-11-27T11:24:25.752Z] Copying: 650/1024 [MB] (650 MBps) [2024-11-27T11:24:26.056Z] Copying: 1024/1024 [MB] (average 640 MBps) 00:28:57.173 00:28:57.173 11:24:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:57.173 11:24:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:59.714 11:24:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:59.714 Fill FTL, iteration 2 00:28:59.714 11:24:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=e0b2dc760c6705a2c94443a3ceb8a3dc 00:28:59.714 11:24:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:59.714 11:24:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:59.714 11:24:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:59.714 11:24:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:59.714 11:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:59.714 11:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:59.714 11:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:59.714 11:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:59.714 11:24:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:59.714 [2024-11-27 11:24:28.146046] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:59.714 [2024-11-27 11:24:28.146160] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92751 ] 00:28:59.714 [2024-11-27 11:24:28.291710] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.714 [2024-11-27 11:24:28.331932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.656  [2024-11-27T11:24:30.926Z] Copying: 239/1024 [MB] (239 MBps) [2024-11-27T11:24:31.869Z] Copying: 471/1024 [MB] (232 MBps) [2024-11-27T11:24:32.811Z] Copying: 705/1024 [MB] (234 MBps) [2024-11-27T11:24:33.071Z] Copying: 933/1024 [MB] (228 MBps) [2024-11-27T11:24:33.331Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:29:04.448 00:29:04.448 Calculate MD5 checksum, iteration 2 00:29:04.448 11:24:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:04.449 11:24:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:04.449 11:24:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:04.449 11:24:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:04.449 11:24:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:04.449 11:24:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:04.449 11:24:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:04.449 11:24:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:04.449 [2024-11-27 11:24:33.161549] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
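Both fill/verify iterations in this phase follow the same pattern: write 1 GiB of random data into the FTL device over NVMe/TCP, read the same region back into a file, and record its MD5 sum for later verification. A sketch of one iteration using the spdk_dd arguments visible in this run (iteration 2 simply advances --seek/--skip to 1024; the FILE variable and the sums[i] assignment are written out here only for illustration):

  FILE=/home/vagrant/spdk_repo/spdk/test/ftl/file
  # Write 1024 blocks of 1 MiB random data into ftln1 at queue depth 2.
  tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
  # Read the same 1 GiB back out and remember its checksum.
  tcp_dd --ib=ftln1 --of=$FILE --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[i]=$(md5sum $FILE | cut -f1 '-d ')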
00:29:04.449 [2024-11-27 11:24:33.161680] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92799 ] 00:29:04.449 [2024-11-27 11:24:33.308413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.708 [2024-11-27 11:24:33.349752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.090  [2024-11-27T11:24:35.546Z] Copying: 635/1024 [MB] (635 MBps) [2024-11-27T11:24:39.747Z] Copying: 1024/1024 [MB] (average 640 MBps) 00:29:10.864 00:29:10.864 11:24:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:10.864 11:24:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:12.765 11:24:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:12.765 11:24:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b5a661e01cdcbac23a5449d6bd604a52 00:29:12.765 11:24:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:12.765 11:24:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:12.766 11:24:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:12.766 [2024-11-27 11:24:41.570860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.766 [2024-11-27 11:24:41.570905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:12.766 [2024-11-27 11:24:41.570917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:12.766 [2024-11-27 11:24:41.570924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.766 [2024-11-27 11:24:41.570941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.766 [2024-11-27 11:24:41.570948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:12.766 [2024-11-27 11:24:41.570963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:12.766 [2024-11-27 11:24:41.570969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.766 [2024-11-27 11:24:41.570984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.766 [2024-11-27 11:24:41.570991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:12.766 [2024-11-27 11:24:41.570998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:12.766 [2024-11-27 11:24:41.571003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.766 [2024-11-27 11:24:41.571054] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.180 ms, result 0 00:29:12.766 true 00:29:12.766 11:24:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:13.024 { 00:29:13.024 "name": "ftl", 00:29:13.024 "properties": [ 00:29:13.024 { 00:29:13.024 "name": "superblock_version", 00:29:13.024 "value": 5, 00:29:13.024 "read-only": true 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "name": "base_device", 00:29:13.024 "bands": [ 00:29:13.024 { 00:29:13.024 "id": 0, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 
00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 1, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 2, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 3, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 4, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 5, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 6, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 7, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 8, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 9, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 10, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 11, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 12, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 13, 00:29:13.024 "state": "FREE", 00:29:13.024 "validity": 0.0 00:29:13.024 }, 00:29:13.024 { 00:29:13.024 "id": 14, 00:29:13.024 "state": "FREE", 00:29:13.025 "validity": 0.0 00:29:13.025 }, 00:29:13.025 { 00:29:13.025 "id": 15, 00:29:13.025 "state": "FREE", 00:29:13.025 "validity": 0.0 00:29:13.025 }, 00:29:13.025 { 00:29:13.025 "id": 16, 00:29:13.025 "state": "FREE", 00:29:13.025 "validity": 0.0 00:29:13.025 }, 00:29:13.025 { 00:29:13.025 "id": 17, 00:29:13.025 "state": "FREE", 00:29:13.025 "validity": 0.0 00:29:13.025 } 00:29:13.025 ], 00:29:13.025 "read-only": true 00:29:13.025 }, 00:29:13.025 { 00:29:13.025 "name": "cache_device", 00:29:13.025 "type": "bdev", 00:29:13.025 "chunks": [ 00:29:13.025 { 00:29:13.025 "id": 0, 00:29:13.025 "state": "INACTIVE", 00:29:13.025 "utilization": 0.0 00:29:13.025 }, 00:29:13.025 { 00:29:13.025 "id": 1, 00:29:13.025 "state": "CLOSED", 00:29:13.025 "utilization": 1.0 00:29:13.025 }, 00:29:13.025 { 00:29:13.025 "id": 2, 00:29:13.025 "state": "CLOSED", 00:29:13.025 "utilization": 1.0 00:29:13.025 }, 00:29:13.025 { 00:29:13.025 "id": 3, 00:29:13.025 "state": "OPEN", 00:29:13.025 "utilization": 0.001953125 00:29:13.025 }, 00:29:13.025 { 00:29:13.025 "id": 4, 00:29:13.025 "state": "OPEN", 00:29:13.025 "utilization": 0.0 00:29:13.025 } 00:29:13.025 ], 00:29:13.025 "read-only": true 00:29:13.025 }, 00:29:13.025 { 00:29:13.025 "name": "verbose_mode", 00:29:13.025 "value": true, 00:29:13.025 "unit": "", 00:29:13.025 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:13.025 }, 00:29:13.025 { 00:29:13.025 "name": "prep_upgrade_on_shutdown", 00:29:13.025 "value": false, 00:29:13.025 "unit": "", 00:29:13.025 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:13.025 } 00:29:13.025 ] 00:29:13.025 } 00:29:13.025 11:24:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:13.283 [2024-11-27 11:24:41.983223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:13.283 [2024-11-27 11:24:41.983256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:13.283 [2024-11-27 11:24:41.983266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:13.283 [2024-11-27 11:24:41.983272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.283 [2024-11-27 11:24:41.983288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.283 [2024-11-27 11:24:41.983294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:13.283 [2024-11-27 11:24:41.983300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:13.283 [2024-11-27 11:24:41.983306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.283 [2024-11-27 11:24:41.983320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.283 [2024-11-27 11:24:41.983326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:13.283 [2024-11-27 11:24:41.983332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:13.283 [2024-11-27 11:24:41.983338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.283 [2024-11-27 11:24:41.983379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.151 ms, result 0 00:29:13.283 true 00:29:13.283 11:24:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:13.283 11:24:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:13.283 11:24:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:13.542 11:24:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:13.542 11:24:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:13.542 11:24:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:13.542 [2024-11-27 11:24:42.403559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.542 [2024-11-27 11:24:42.403589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:13.542 [2024-11-27 11:24:42.403598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:13.542 [2024-11-27 11:24:42.403603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.542 [2024-11-27 11:24:42.403619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.542 [2024-11-27 11:24:42.403625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:13.542 [2024-11-27 11:24:42.403631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:13.542 [2024-11-27 11:24:42.403636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.542 [2024-11-27 11:24:42.403651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.542 [2024-11-27 11:24:42.403657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:13.542 [2024-11-27 11:24:42.403663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:13.542 [2024-11-27 11:24:42.403669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:13.542 [2024-11-27 11:24:42.403711] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.143 ms, result 0 00:29:13.542 true 00:29:13.542 11:24:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:13.801 { 00:29:13.801 "name": "ftl", 00:29:13.801 "properties": [ 00:29:13.801 { 00:29:13.801 "name": "superblock_version", 00:29:13.801 "value": 5, 00:29:13.801 "read-only": true 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "name": "base_device", 00:29:13.801 "bands": [ 00:29:13.801 { 00:29:13.801 "id": 0, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 1, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 2, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 3, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 4, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 5, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 6, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 7, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 8, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 9, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 10, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 11, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 12, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 13, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 14, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 15, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 16, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 17, 00:29:13.801 "state": "FREE", 00:29:13.801 "validity": 0.0 00:29:13.801 } 00:29:13.801 ], 00:29:13.801 "read-only": true 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "name": "cache_device", 00:29:13.801 "type": "bdev", 00:29:13.801 "chunks": [ 00:29:13.801 { 00:29:13.801 "id": 0, 00:29:13.801 "state": "INACTIVE", 00:29:13.801 "utilization": 0.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 1, 00:29:13.801 "state": "CLOSED", 00:29:13.801 "utilization": 1.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 2, 00:29:13.801 "state": "CLOSED", 00:29:13.801 "utilization": 1.0 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 3, 00:29:13.801 "state": "OPEN", 00:29:13.801 "utilization": 0.001953125 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "id": 4, 00:29:13.801 "state": "OPEN", 00:29:13.801 "utilization": 0.0 00:29:13.801 } 00:29:13.801 ], 00:29:13.801 "read-only": true 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "name": "verbose_mode", 
00:29:13.801 "value": true, 00:29:13.801 "unit": "", 00:29:13.801 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:13.801 }, 00:29:13.801 { 00:29:13.801 "name": "prep_upgrade_on_shutdown", 00:29:13.801 "value": true, 00:29:13.801 "unit": "", 00:29:13.801 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:13.801 } 00:29:13.801 ] 00:29:13.801 } 00:29:13.801 11:24:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:13.801 11:24:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92481 ]] 00:29:13.801 11:24:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92481 00:29:13.801 11:24:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92481 ']' 00:29:13.801 11:24:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92481 00:29:13.801 11:24:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:29:13.801 11:24:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:13.801 11:24:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92481 00:29:13.801 11:24:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:13.801 11:24:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:13.801 11:24:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92481' 00:29:13.801 killing process with pid 92481 00:29:13.801 11:24:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92481 00:29:13.801 11:24:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92481 00:29:14.060 [2024-11-27 11:24:42.683125] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:14.060 [2024-11-27 11:24:42.689177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.060 [2024-11-27 11:24:42.689209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:14.060 [2024-11-27 11:24:42.689218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:14.060 [2024-11-27 11:24:42.689228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.060 [2024-11-27 11:24:42.689247] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:14.060 [2024-11-27 11:24:42.689621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.060 [2024-11-27 11:24:42.689645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:14.060 [2024-11-27 11:24:42.689653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.363 ms 00:29:14.060 [2024-11-27 11:24:42.689659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.185 [2024-11-27 11:24:50.113783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.185 [2024-11-27 11:24:50.113831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:22.185 [2024-11-27 11:24:50.113844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7424.072 ms 00:29:22.185 [2024-11-27 11:24:50.113856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.185 [2024-11-27 11:24:50.114999] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:22.185 [2024-11-27 11:24:50.115022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:22.185 [2024-11-27 11:24:50.115030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.129 ms 00:29:22.185 [2024-11-27 11:24:50.115036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.185 [2024-11-27 11:24:50.115903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.185 [2024-11-27 11:24:50.115928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:22.185 [2024-11-27 11:24:50.115935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.845 ms 00:29:22.185 [2024-11-27 11:24:50.115943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.185 [2024-11-27 11:24:50.118127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.185 [2024-11-27 11:24:50.118156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:22.185 [2024-11-27 11:24:50.118164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.155 ms 00:29:22.185 [2024-11-27 11:24:50.118170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.185 [2024-11-27 11:24:50.120734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.185 [2024-11-27 11:24:50.120763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:22.185 [2024-11-27 11:24:50.120770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.538 ms 00:29:22.185 [2024-11-27 11:24:50.120776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.185 [2024-11-27 11:24:50.120820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.185 [2024-11-27 11:24:50.120827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:22.185 [2024-11-27 11:24:50.120834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:22.185 [2024-11-27 11:24:50.120844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.185 [2024-11-27 11:24:50.122316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.185 [2024-11-27 11:24:50.122341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:22.185 [2024-11-27 11:24:50.122349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.459 ms 00:29:22.185 [2024-11-27 11:24:50.122355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.185 [2024-11-27 11:24:50.123949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.185 [2024-11-27 11:24:50.123973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:22.185 [2024-11-27 11:24:50.123980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.559 ms 00:29:22.185 [2024-11-27 11:24:50.123986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.185 [2024-11-27 11:24:50.125416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.185 [2024-11-27 11:24:50.125443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:22.185 [2024-11-27 11:24:50.125450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.395 ms 00:29:22.185 [2024-11-27 11:24:50.125456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.185 [2024-11-27 11:24:50.126934] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.185 [2024-11-27 11:24:50.126958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:22.185 [2024-11-27 11:24:50.126965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.431 ms 00:29:22.185 [2024-11-27 11:24:50.126970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.185 [2024-11-27 11:24:50.126993] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:22.185 [2024-11-27 11:24:50.127009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:22.185 [2024-11-27 11:24:50.127018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:22.185 [2024-11-27 11:24:50.127024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:22.185 [2024-11-27 11:24:50.127031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:22.185 [2024-11-27 11:24:50.127037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:22.185 [2024-11-27 11:24:50.127043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:22.186 [2024-11-27 11:24:50.127049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:22.186 [2024-11-27 11:24:50.127055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:22.186 [2024-11-27 11:24:50.127061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:22.186 [2024-11-27 11:24:50.127066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:22.186 [2024-11-27 11:24:50.127072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:22.186 [2024-11-27 11:24:50.127077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:22.186 [2024-11-27 11:24:50.127084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:22.186 [2024-11-27 11:24:50.127090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:22.186 [2024-11-27 11:24:50.127096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:22.186 [2024-11-27 11:24:50.127102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:22.186 [2024-11-27 11:24:50.127108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:22.186 [2024-11-27 11:24:50.127114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:22.186 [2024-11-27 11:24:50.127120] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:22.186 [2024-11-27 11:24:50.127126] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a1a157f8-afb4-4c99-8e2c-a88e073d37a5 00:29:22.186 [2024-11-27 11:24:50.127132] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:22.186 [2024-11-27 11:24:50.127137] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:22.186 [2024-11-27 11:24:50.127142] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:22.186 [2024-11-27 11:24:50.127148] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:22.186 [2024-11-27 11:24:50.127154] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:22.186 [2024-11-27 11:24:50.127159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:22.186 [2024-11-27 11:24:50.127167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:22.186 [2024-11-27 11:24:50.127172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:22.186 [2024-11-27 11:24:50.127177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:22.186 [2024-11-27 11:24:50.127184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.186 [2024-11-27 11:24:50.127190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:22.186 [2024-11-27 11:24:50.127196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:29:22.186 [2024-11-27 11:24:50.127202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.128486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.186 [2024-11-27 11:24:50.128506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:22.186 [2024-11-27 11:24:50.128514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.264 ms 00:29:22.186 [2024-11-27 11:24:50.128524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.128592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.186 [2024-11-27 11:24:50.128599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:22.186 [2024-11-27 11:24:50.128605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:22.186 [2024-11-27 11:24:50.128611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.133116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:22.186 [2024-11-27 11:24:50.133142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:22.186 [2024-11-27 11:24:50.133149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:22.186 [2024-11-27 11:24:50.133159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.133179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:22.186 [2024-11-27 11:24:50.133185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:22.186 [2024-11-27 11:24:50.133191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:22.186 [2024-11-27 11:24:50.133201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.133248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:22.186 [2024-11-27 11:24:50.133256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:22.186 [2024-11-27 11:24:50.133262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:22.186 [2024-11-27 11:24:50.133268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.133283] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:22.186 [2024-11-27 11:24:50.133290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:22.186 [2024-11-27 11:24:50.133297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:22.186 [2024-11-27 11:24:50.133303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.141215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:22.186 [2024-11-27 11:24:50.141247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:22.186 [2024-11-27 11:24:50.141255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:22.186 [2024-11-27 11:24:50.141265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.147698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:22.186 [2024-11-27 11:24:50.147728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:22.186 [2024-11-27 11:24:50.147736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:22.186 [2024-11-27 11:24:50.147742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.147788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:22.186 [2024-11-27 11:24:50.147803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:22.186 [2024-11-27 11:24:50.147809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:22.186 [2024-11-27 11:24:50.147815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.147838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:22.186 [2024-11-27 11:24:50.147847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:22.186 [2024-11-27 11:24:50.147854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:22.186 [2024-11-27 11:24:50.147860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.147922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:22.186 [2024-11-27 11:24:50.147932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:22.186 [2024-11-27 11:24:50.147938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:22.186 [2024-11-27 11:24:50.147944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.147966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:22.186 [2024-11-27 11:24:50.147975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:22.186 [2024-11-27 11:24:50.147981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:22.186 [2024-11-27 11:24:50.147987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.148019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:22.186 [2024-11-27 11:24:50.148027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:22.186 [2024-11-27 11:24:50.148033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:22.186 [2024-11-27 11:24:50.148038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 
[2024-11-27 11:24:50.148072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:22.186 [2024-11-27 11:24:50.148083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:22.186 [2024-11-27 11:24:50.148089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:22.186 [2024-11-27 11:24:50.148095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.186 [2024-11-27 11:24:50.148184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7458.961 ms, result 0 00:29:22.760 11:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93017 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93017 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93017 ']' 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:22.761 11:24:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:23.020 [2024-11-27 11:24:51.694017] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:23.020 [2024-11-27 11:24:51.694142] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93017 ] 00:29:23.020 [2024-11-27 11:24:51.842316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.020 [2024-11-27 11:24:51.882924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.279 [2024-11-27 11:24:52.134477] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:23.279 [2024-11-27 11:24:52.134530] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:23.539 [2024-11-27 11:24:52.272197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.539 [2024-11-27 11:24:52.272232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:23.539 [2024-11-27 11:24:52.272242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:23.539 [2024-11-27 11:24:52.272252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.539 [2024-11-27 11:24:52.272293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.539 [2024-11-27 11:24:52.272303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:23.540 [2024-11-27 11:24:52.272309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:23.540 [2024-11-27 11:24:52.272317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.540 [2024-11-27 11:24:52.272335] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:23.540 [2024-11-27 11:24:52.272504] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:23.540 [2024-11-27 11:24:52.272516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.540 [2024-11-27 11:24:52.272522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:23.540 [2024-11-27 11:24:52.272534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:29:23.540 [2024-11-27 11:24:52.272539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.540 [2024-11-27 11:24:52.273490] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:23.540 [2024-11-27 11:24:52.275942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.540 [2024-11-27 11:24:52.275971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:23.540 [2024-11-27 11:24:52.275979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.452 ms 00:29:23.540 [2024-11-27 11:24:52.275989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.540 [2024-11-27 11:24:52.276033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.540 [2024-11-27 11:24:52.276040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:23.540 [2024-11-27 11:24:52.276047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:23.540 [2024-11-27 11:24:52.276052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.540 [2024-11-27 11:24:52.280478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.540 [2024-11-27 
11:24:52.280504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:23.540 [2024-11-27 11:24:52.280517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.383 ms 00:29:23.540 [2024-11-27 11:24:52.280523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.540 [2024-11-27 11:24:52.280552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.540 [2024-11-27 11:24:52.280559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:23.540 [2024-11-27 11:24:52.280565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:23.540 [2024-11-27 11:24:52.280570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.540 [2024-11-27 11:24:52.280604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.540 [2024-11-27 11:24:52.280611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:23.540 [2024-11-27 11:24:52.280623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:23.540 [2024-11-27 11:24:52.280630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.540 [2024-11-27 11:24:52.280645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:23.540 [2024-11-27 11:24:52.281819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.540 [2024-11-27 11:24:52.281848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:23.540 [2024-11-27 11:24:52.281855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.177 ms 00:29:23.540 [2024-11-27 11:24:52.281860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.540 [2024-11-27 11:24:52.281882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.540 [2024-11-27 11:24:52.281909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:23.540 [2024-11-27 11:24:52.281915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:23.540 [2024-11-27 11:24:52.281925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.540 [2024-11-27 11:24:52.281940] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:23.540 [2024-11-27 11:24:52.281954] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:23.540 [2024-11-27 11:24:52.281981] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:23.540 [2024-11-27 11:24:52.281994] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:23.540 [2024-11-27 11:24:52.282073] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:23.540 [2024-11-27 11:24:52.282082] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:23.540 [2024-11-27 11:24:52.282091] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:23.540 [2024-11-27 11:24:52.282101] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:23.540 [2024-11-27 11:24:52.282108] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:23.540 [2024-11-27 11:24:52.282114] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:23.540 [2024-11-27 11:24:52.282120] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:23.540 [2024-11-27 11:24:52.282125] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:23.540 [2024-11-27 11:24:52.282131] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:23.540 [2024-11-27 11:24:52.282137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.540 [2024-11-27 11:24:52.282142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:23.540 [2024-11-27 11:24:52.282148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:29:23.540 [2024-11-27 11:24:52.282153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.540 [2024-11-27 11:24:52.282219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.540 [2024-11-27 11:24:52.282225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:23.540 [2024-11-27 11:24:52.282231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:23.540 [2024-11-27 11:24:52.282236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.540 [2024-11-27 11:24:52.282313] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:23.540 [2024-11-27 11:24:52.282321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:23.540 [2024-11-27 11:24:52.282329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:23.540 [2024-11-27 11:24:52.282335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.540 [2024-11-27 11:24:52.282341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:23.540 [2024-11-27 11:24:52.282346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:23.540 [2024-11-27 11:24:52.282352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:23.540 [2024-11-27 11:24:52.282357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:23.540 [2024-11-27 11:24:52.282362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:23.540 [2024-11-27 11:24:52.282367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.540 [2024-11-27 11:24:52.282371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:23.540 [2024-11-27 11:24:52.282379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:23.540 [2024-11-27 11:24:52.282384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.540 [2024-11-27 11:24:52.282390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:23.540 [2024-11-27 11:24:52.282399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:23.540 [2024-11-27 11:24:52.282406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.540 [2024-11-27 11:24:52.282411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:23.540 [2024-11-27 11:24:52.282416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:23.540 [2024-11-27 11:24:52.282425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.540 [2024-11-27 11:24:52.282431] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:23.540 [2024-11-27 11:24:52.282436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:23.540 [2024-11-27 11:24:52.282441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:23.540 [2024-11-27 11:24:52.282446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:23.540 [2024-11-27 11:24:52.282451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:23.540 [2024-11-27 11:24:52.282456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:23.540 [2024-11-27 11:24:52.282461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:23.540 [2024-11-27 11:24:52.282466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:23.540 [2024-11-27 11:24:52.282472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:23.540 [2024-11-27 11:24:52.282477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:23.540 [2024-11-27 11:24:52.282483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:23.540 [2024-11-27 11:24:52.282488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:23.540 [2024-11-27 11:24:52.282494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:23.540 [2024-11-27 11:24:52.282499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:23.540 [2024-11-27 11:24:52.282505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.540 [2024-11-27 11:24:52.282512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:23.540 [2024-11-27 11:24:52.282518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:23.540 [2024-11-27 11:24:52.282523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.540 [2024-11-27 11:24:52.282529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:23.540 [2024-11-27 11:24:52.282535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:23.540 [2024-11-27 11:24:52.282540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.540 [2024-11-27 11:24:52.282546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:23.540 [2024-11-27 11:24:52.282552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:23.540 [2024-11-27 11:24:52.282557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.540 [2024-11-27 11:24:52.282564] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:23.540 [2024-11-27 11:24:52.282572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:23.540 [2024-11-27 11:24:52.282579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:23.541 [2024-11-27 11:24:52.282585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.541 [2024-11-27 11:24:52.282591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:23.541 [2024-11-27 11:24:52.282597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:23.541 [2024-11-27 11:24:52.282603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:23.541 [2024-11-27 11:24:52.282610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:23.541 [2024-11-27 11:24:52.282616] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:23.541 [2024-11-27 11:24:52.282621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:23.541 [2024-11-27 11:24:52.282628] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:23.541 [2024-11-27 11:24:52.282636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:23.541 [2024-11-27 11:24:52.282642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:23.541 [2024-11-27 11:24:52.282648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:23.541 [2024-11-27 11:24:52.282655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:23.541 [2024-11-27 11:24:52.282660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:23.541 [2024-11-27 11:24:52.282667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:23.541 [2024-11-27 11:24:52.282673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:23.541 [2024-11-27 11:24:52.282679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:23.541 [2024-11-27 11:24:52.282684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:23.541 [2024-11-27 11:24:52.282691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:23.541 [2024-11-27 11:24:52.282697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:23.541 [2024-11-27 11:24:52.282703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:23.541 [2024-11-27 11:24:52.282711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:23.541 [2024-11-27 11:24:52.282718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:23.541 [2024-11-27 11:24:52.282724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:23.541 [2024-11-27 11:24:52.282730] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:23.541 [2024-11-27 11:24:52.282737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:23.541 [2024-11-27 11:24:52.282744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:23.541 [2024-11-27 11:24:52.282751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:23.541 [2024-11-27 11:24:52.282757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:23.541 [2024-11-27 11:24:52.282770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:23.541 [2024-11-27 11:24:52.282777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.541 [2024-11-27 11:24:52.282784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:23.541 [2024-11-27 11:24:52.282790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:29:23.541 [2024-11-27 11:24:52.282798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.541 [2024-11-27 11:24:52.282828] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:23.541 [2024-11-27 11:24:52.282835] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:27.749 [2024-11-27 11:24:56.380604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-27 11:24:56.380685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:27.749 [2024-11-27 11:24:56.380703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4097.756 ms 00:29:27.749 [2024-11-27 11:24:56.380713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-27 11:24:56.394789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-27 11:24:56.394849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:27.749 [2024-11-27 11:24:56.394865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.943 ms 00:29:27.749 [2024-11-27 11:24:56.394875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-27 11:24:56.394970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-27 11:24:56.394983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:27.749 [2024-11-27 11:24:56.394993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:27.749 [2024-11-27 11:24:56.395011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-27 11:24:56.416566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-27 11:24:56.416653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:27.749 [2024-11-27 11:24:56.416677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.502 ms 00:29:27.749 [2024-11-27 11:24:56.416693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-27 11:24:56.416779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-27 11:24:56.416799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:27.749 [2024-11-27 11:24:56.416819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:27.749 [2024-11-27 11:24:56.416834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-27 11:24:56.417624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-27 11:24:56.417683] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:27.749 [2024-11-27 11:24:56.417702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.643 ms 00:29:27.749 [2024-11-27 11:24:56.417717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-27 11:24:56.417816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-27 11:24:56.417838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:27.749 [2024-11-27 11:24:56.417854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:29:27.749 [2024-11-27 11:24:56.417869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.428211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.428280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:27.750 [2024-11-27 11:24:56.428300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.274 ms 00:29:27.750 [2024-11-27 11:24:56.428315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.432557] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:27.750 [2024-11-27 11:24:56.432613] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:27.750 [2024-11-27 11:24:56.432627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.432637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:27.750 [2024-11-27 11:24:56.432647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.116 ms 00:29:27.750 [2024-11-27 11:24:56.432655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.437623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.437671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:27.750 [2024-11-27 11:24:56.437691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.910 ms 00:29:27.750 [2024-11-27 11:24:56.437700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.440339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.440388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:27.750 [2024-11-27 11:24:56.440399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.582 ms 00:29:27.750 [2024-11-27 11:24:56.440407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.443030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.443076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:27.750 [2024-11-27 11:24:56.443086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.574 ms 00:29:27.750 [2024-11-27 11:24:56.443095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.443446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.443469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:27.750 [2024-11-27 
11:24:56.443482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:29:27.750 [2024-11-27 11:24:56.443491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.467311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.467373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:27.750 [2024-11-27 11:24:56.467394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.799 ms 00:29:27.750 [2024-11-27 11:24:56.467403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.475524] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:27.750 [2024-11-27 11:24:56.476549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.476592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:27.750 [2024-11-27 11:24:56.476604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.088 ms 00:29:27.750 [2024-11-27 11:24:56.476620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.476737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.476750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:27.750 [2024-11-27 11:24:56.476760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:27.750 [2024-11-27 11:24:56.476769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.476819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.476835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:27.750 [2024-11-27 11:24:56.476843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:29:27.750 [2024-11-27 11:24:56.476852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.476878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.476906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:27.750 [2024-11-27 11:24:56.476919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:27.750 [2024-11-27 11:24:56.476928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.476965] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:27.750 [2024-11-27 11:24:56.476976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.476985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:27.750 [2024-11-27 11:24:56.476994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:27.750 [2024-11-27 11:24:56.477007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.482264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.482324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:27.750 [2024-11-27 11:24:56.482336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.208 ms 00:29:27.750 [2024-11-27 11:24:56.482344] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.482440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.750 [2024-11-27 11:24:56.482452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:27.750 [2024-11-27 11:24:56.482462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:29:27.750 [2024-11-27 11:24:56.482472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.750 [2024-11-27 11:24:56.484104] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4211.383 ms, result 0 00:29:27.750 [2024-11-27 11:24:56.497099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.750 [2024-11-27 11:24:56.513085] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:27.750 [2024-11-27 11:24:56.521222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:27.750 11:24:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:27.750 11:24:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:29:27.750 11:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:27.750 11:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:27.750 11:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:28.011 [2024-11-27 11:24:56.749251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.011 [2024-11-27 11:24:56.749308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:28.011 [2024-11-27 11:24:56.749324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:28.011 [2024-11-27 11:24:56.749333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.011 [2024-11-27 11:24:56.749357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.011 [2024-11-27 11:24:56.749366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:28.011 [2024-11-27 11:24:56.749375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:28.011 [2024-11-27 11:24:56.749383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.011 [2024-11-27 11:24:56.749409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.011 [2024-11-27 11:24:56.749418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:28.011 [2024-11-27 11:24:56.749428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:28.011 [2024-11-27 11:24:56.749436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.011 [2024-11-27 11:24:56.749497] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.243 ms, result 0 00:29:28.011 true 00:29:28.011 11:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:28.272 { 00:29:28.272 "name": "ftl", 00:29:28.272 "properties": [ 00:29:28.272 { 00:29:28.272 "name": "superblock_version", 00:29:28.272 "value": 5, 00:29:28.272 "read-only": true 00:29:28.272 }, 00:29:28.272 { 
00:29:28.272 "name": "base_device", 00:29:28.272 "bands": [ 00:29:28.272 { 00:29:28.272 "id": 0, 00:29:28.272 "state": "CLOSED", 00:29:28.272 "validity": 1.0 00:29:28.272 }, 00:29:28.272 { 00:29:28.272 "id": 1, 00:29:28.272 "state": "CLOSED", 00:29:28.272 "validity": 1.0 00:29:28.272 }, 00:29:28.272 { 00:29:28.272 "id": 2, 00:29:28.272 "state": "CLOSED", 00:29:28.272 "validity": 0.007843137254901933 00:29:28.272 }, 00:29:28.272 { 00:29:28.272 "id": 3, 00:29:28.272 "state": "FREE", 00:29:28.272 "validity": 0.0 00:29:28.272 }, 00:29:28.272 { 00:29:28.272 "id": 4, 00:29:28.272 "state": "FREE", 00:29:28.272 "validity": 0.0 00:29:28.272 }, 00:29:28.272 { 00:29:28.272 "id": 5, 00:29:28.272 "state": "FREE", 00:29:28.272 "validity": 0.0 00:29:28.272 }, 00:29:28.272 { 00:29:28.272 "id": 6, 00:29:28.272 "state": "FREE", 00:29:28.272 "validity": 0.0 00:29:28.272 }, 00:29:28.272 { 00:29:28.272 "id": 7, 00:29:28.272 "state": "FREE", 00:29:28.272 "validity": 0.0 00:29:28.272 }, 00:29:28.272 { 00:29:28.272 "id": 8, 00:29:28.272 "state": "FREE", 00:29:28.272 "validity": 0.0 00:29:28.272 }, 00:29:28.273 { 00:29:28.273 "id": 9, 00:29:28.273 "state": "FREE", 00:29:28.273 "validity": 0.0 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "id": 10, 00:29:28.273 "state": "FREE", 00:29:28.273 "validity": 0.0 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "id": 11, 00:29:28.273 "state": "FREE", 00:29:28.273 "validity": 0.0 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "id": 12, 00:29:28.273 "state": "FREE", 00:29:28.273 "validity": 0.0 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "id": 13, 00:29:28.273 "state": "FREE", 00:29:28.273 "validity": 0.0 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "id": 14, 00:29:28.273 "state": "FREE", 00:29:28.273 "validity": 0.0 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "id": 15, 00:29:28.273 "state": "FREE", 00:29:28.273 "validity": 0.0 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "id": 16, 00:29:28.273 "state": "FREE", 00:29:28.273 "validity": 0.0 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "id": 17, 00:29:28.273 "state": "FREE", 00:29:28.273 "validity": 0.0 00:29:28.273 } 00:29:28.273 ], 00:29:28.273 "read-only": true 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "name": "cache_device", 00:29:28.273 "type": "bdev", 00:29:28.273 "chunks": [ 00:29:28.273 { 00:29:28.273 "id": 0, 00:29:28.273 "state": "INACTIVE", 00:29:28.273 "utilization": 0.0 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "id": 1, 00:29:28.273 "state": "OPEN", 00:29:28.273 "utilization": 0.0 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "id": 2, 00:29:28.273 "state": "OPEN", 00:29:28.273 "utilization": 0.0 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "id": 3, 00:29:28.273 "state": "FREE", 00:29:28.273 "utilization": 0.0 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "id": 4, 00:29:28.273 "state": "FREE", 00:29:28.273 "utilization": 0.0 00:29:28.273 } 00:29:28.273 ], 00:29:28.273 "read-only": true 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "name": "verbose_mode", 00:29:28.273 "value": true, 00:29:28.273 "unit": "", 00:29:28.273 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:28.273 }, 00:29:28.273 { 00:29:28.273 "name": "prep_upgrade_on_shutdown", 00:29:28.273 "value": false, 00:29:28.273 "unit": "", 00:29:28.273 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:28.273 } 00:29:28.273 ] 00:29:28.273 } 00:29:28.273 11:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:28.273 11:24:57 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:28.273 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:28.534 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:28.534 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:28.534 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:28.534 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:28.534 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:28.796 Validate MD5 checksum, iteration 1 00:29:28.796 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:28.796 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:28.796 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:28.796 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:28.796 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:28.796 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:28.796 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:28.796 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:28.796 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:28.796 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:28.796 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:28.796 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:28.796 11:24:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:28.796 [2024-11-27 11:24:57.513160] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
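The property check traced just above can be read as the standalone sketch below; it is a reconstruction from the xtrace lines (upgrade_shutdown.sh@78, @59, @82-83 and @89-90), with the $rpc variable introduced here only for brevity:

    # Sketch of the pre-shutdown FTL property check, reassembled from the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Enable verbose mode so the advanced FTL properties (bands, chunks) are exposed.
    $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true

    # Count NV cache chunks that still hold data; the test expects 0 before shutting down.
    used=$($rpc bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -ne 0 ]] && echo "unexpected dirty chunks: $used"

    # Count bands reported as OPENED; again 0 is expected.
    opened=$($rpc bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
    [[ $opened -ne 0 ]] && echo "unexpected open bands: $opened"

Note that in the JSON dumped above the bands sit under the "base_device" property rather than under a property literally named "bands", so the second filter appears to match nothing and evaluates to 0 either way; the opened=0 result in the trace is consistent with that.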
00:29:28.796 [2024-11-27 11:24:57.513736] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93087 ] 00:29:28.796 [2024-11-27 11:24:57.667186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.057 [2024-11-27 11:24:57.718933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.447  [2024-11-27T11:25:00.274Z] Copying: 531/1024 [MB] (531 MBps) [2024-11-27T11:25:00.846Z] Copying: 1024/1024 [MB] (average 538 MBps) 00:29:31.963 00:29:31.963 11:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:31.963 11:25:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:34.564 11:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:34.564 Validate MD5 checksum, iteration 2 00:29:34.564 11:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e0b2dc760c6705a2c94443a3ceb8a3dc 00:29:34.564 11:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e0b2dc760c6705a2c94443a3ceb8a3dc != \e\0\b\2\d\c\7\6\0\c\6\7\0\5\a\2\c\9\4\4\4\3\a\3\c\e\b\8\a\3\d\c ]] 00:29:34.564 11:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:34.564 11:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:34.564 11:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:34.564 11:25:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:34.564 11:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:34.564 11:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:34.564 11:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:34.564 11:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:34.564 11:25:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:34.564 [2024-11-27 11:25:02.930549] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
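Folding the two iterations traced here into one loop, the checksum pass amounts to the sketch below; the paths, block size, queue depth and two-iteration count are read directly off the trace, the tcp_dd wrapper is expanded into the spdk_dd call shown at common.sh@199, and $expected_sum stands in for the reference checksum recorded earlier in the test (not visible in this excerpt):

    # Reconstruction of test_validate_checksum (upgrade_shutdown.sh@96-105) as run here.
    dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    ini_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    out=/home/vagrant/spdk_repo/spdk/test/ftl/file

    skip=0
    iterations=2                          # two 1 GiB windows are checked in this run
    for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # Read 1024 MiB from the ftln1 NVMe/TCP namespace, starting $skip MiB in.
      $dd_bin '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=$ini_json \
        --ib=ftln1 --of=$out --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      sum=$(md5sum $out | cut -d' ' -f1)
      # This run produced e0b2dc76... and b5a661e0... for the two windows.
      [[ $sum != "$expected_sum" ]] && echo "checksum mismatch at window $i"
    done

The same two-iteration pass is repeated later in this excerpt, after the dirty restart, to confirm that the recovered data still produces the same checksums.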
00:29:34.564 [2024-11-27 11:25:02.930784] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93154 ] 00:29:34.564 [2024-11-27 11:25:03.078022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.564 [2024-11-27 11:25:03.109137] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.951  [2024-11-27T11:25:05.406Z] Copying: 641/1024 [MB] (641 MBps) [2024-11-27T11:25:05.978Z] Copying: 1024/1024 [MB] (average 585 MBps) 00:29:37.095 00:29:37.095 11:25:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:37.095 11:25:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:39.631 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:39.631 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b5a661e01cdcbac23a5449d6bd604a52 00:29:39.631 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b5a661e01cdcbac23a5449d6bd604a52 != \b\5\a\6\6\1\e\0\1\c\d\c\b\a\c\2\3\a\5\4\4\9\d\6\b\d\6\0\4\a\5\2 ]] 00:29:39.631 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:39.631 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:39.631 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:39.631 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 93017 ]] 00:29:39.631 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 93017 00:29:39.631 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:39.631 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:39.631 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:39.631 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:39.631 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:39.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.632 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93210 00:29:39.632 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:39.632 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93210 00:29:39.632 11:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93210 ']' 00:29:39.632 11:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:39.632 11:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.632 11:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:39.632 11:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
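The dirty-shutdown step traced above (tcp_target_shutdown_dirty at common.sh@137-139, followed by tcp_target_setup at common.sh@81-91) reduces to the sketch below; the cpumask, config path and waitforlisten helper are taken from the trace, while launching the target with & and $! is only an illustration of how the helper ends up tracking the new PID:

    # Sketch of the dirty shutdown and restart exercised by this test.
    tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    tgt_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json

    # Kill the running target outright (no clean FTL shutdown), leaving dirty
    # state behind that the next startup has to recover from shared memory.
    [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
    unset spdk_tgt_pid

    # Restart the target from the saved configuration and wait for its RPC socket.
    $tgt_bin '--cpumask=[0]' --config=$tgt_json &
    spdk_tgt_pid=$!
    waitforlisten $spdk_tgt_pid           # helper from test/common/autotest_common.sh

The NOTICE lines that follow ("SHM: clean 0, shm_clean 0", the P2L checkpoint preprocessing, and the "Recover open chunk" management processes) are the FTL layer replaying exactly that dirty state during the second startup.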
00:29:39.632 11:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:39.632 11:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:39.632 [2024-11-27 11:25:08.178981] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:39.632 [2024-11-27 11:25:08.179318] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93210 ] 00:29:39.632 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 93017 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:39.632 [2024-11-27 11:25:08.328296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.632 [2024-11-27 11:25:08.371455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.893 [2024-11-27 11:25:08.625871] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:39.893 [2024-11-27 11:25:08.626098] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:39.893 [2024-11-27 11:25:08.767921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.893 [2024-11-27 11:25:08.768068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:39.893 [2024-11-27 11:25:08.768091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:39.893 [2024-11-27 11:25:08.768103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.893 [2024-11-27 11:25:08.768167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.893 [2024-11-27 11:25:08.768181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:39.893 [2024-11-27 11:25:08.768189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:29:39.893 [2024-11-27 11:25:08.768196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.893 [2024-11-27 11:25:08.768226] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:39.893 [2024-11-27 11:25:08.768453] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:39.893 [2024-11-27 11:25:08.768468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.893 [2024-11-27 11:25:08.768475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:39.893 [2024-11-27 11:25:08.768485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.250 ms 00:29:39.893 [2024-11-27 11:25:08.768492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.893 [2024-11-27 11:25:08.768725] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:39.893 [2024-11-27 11:25:08.772933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.893 [2024-11-27 11:25:08.772967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:39.893 [2024-11-27 11:25:08.772978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.209 ms 00:29:39.893 [2024-11-27 11:25:08.772990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.155 [2024-11-27 11:25:08.773903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:40.155 [2024-11-27 11:25:08.773931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:40.155 [2024-11-27 11:25:08.773941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:40.155 [2024-11-27 11:25:08.773949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.155 [2024-11-27 11:25:08.774211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.155 [2024-11-27 11:25:08.774223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:40.155 [2024-11-27 11:25:08.774237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.216 ms 00:29:40.155 [2024-11-27 11:25:08.774246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.155 [2024-11-27 11:25:08.774281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.155 [2024-11-27 11:25:08.774290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:40.155 [2024-11-27 11:25:08.774298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:40.155 [2024-11-27 11:25:08.774306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.155 [2024-11-27 11:25:08.774333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.155 [2024-11-27 11:25:08.774342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:40.155 [2024-11-27 11:25:08.774354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:40.155 [2024-11-27 11:25:08.774364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.155 [2024-11-27 11:25:08.774386] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:40.155 [2024-11-27 11:25:08.775255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.155 [2024-11-27 11:25:08.775280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:40.155 [2024-11-27 11:25:08.775289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.875 ms 00:29:40.155 [2024-11-27 11:25:08.775296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.155 [2024-11-27 11:25:08.775321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.155 [2024-11-27 11:25:08.775334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:40.155 [2024-11-27 11:25:08.775341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:40.156 [2024-11-27 11:25:08.775351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.156 [2024-11-27 11:25:08.775376] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:40.156 [2024-11-27 11:25:08.775394] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:40.156 [2024-11-27 11:25:08.775430] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:40.156 [2024-11-27 11:25:08.775447] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:40.156 [2024-11-27 11:25:08.775547] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:40.156 [2024-11-27 11:25:08.775559] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:40.156 [2024-11-27 11:25:08.775571] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:40.156 [2024-11-27 11:25:08.775584] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:40.156 [2024-11-27 11:25:08.775592] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:40.156 [2024-11-27 11:25:08.775603] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:40.156 [2024-11-27 11:25:08.775610] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:40.156 [2024-11-27 11:25:08.775617] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:40.156 [2024-11-27 11:25:08.775624] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:40.156 [2024-11-27 11:25:08.775631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.156 [2024-11-27 11:25:08.775638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:40.156 [2024-11-27 11:25:08.775645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:29:40.156 [2024-11-27 11:25:08.775652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.156 [2024-11-27 11:25:08.775738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.156 [2024-11-27 11:25:08.775746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:40.156 [2024-11-27 11:25:08.775753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:29:40.156 [2024-11-27 11:25:08.775762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.156 [2024-11-27 11:25:08.775871] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:40.156 [2024-11-27 11:25:08.775881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:40.156 [2024-11-27 11:25:08.775900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:40.156 [2024-11-27 11:25:08.775909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:40.156 [2024-11-27 11:25:08.775917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:40.156 [2024-11-27 11:25:08.775925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:40.156 [2024-11-27 11:25:08.775933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:40.156 [2024-11-27 11:25:08.775941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:40.156 [2024-11-27 11:25:08.775950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:40.156 [2024-11-27 11:25:08.775957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:40.156 [2024-11-27 11:25:08.775967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:40.156 [2024-11-27 11:25:08.775974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:40.156 [2024-11-27 11:25:08.775987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:40.156 [2024-11-27 11:25:08.775995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:40.156 [2024-11-27 11:25:08.776003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:29:40.156 [2024-11-27 11:25:08.776014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:40.156 [2024-11-27 11:25:08.776022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:40.156 [2024-11-27 11:25:08.776029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:40.156 [2024-11-27 11:25:08.776037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:40.156 [2024-11-27 11:25:08.776045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:40.156 [2024-11-27 11:25:08.776052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:40.156 [2024-11-27 11:25:08.776060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:40.156 [2024-11-27 11:25:08.776067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:40.156 [2024-11-27 11:25:08.776074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:40.156 [2024-11-27 11:25:08.776082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:40.156 [2024-11-27 11:25:08.776090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:40.156 [2024-11-27 11:25:08.776097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:40.156 [2024-11-27 11:25:08.776104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:40.156 [2024-11-27 11:25:08.776112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:40.156 [2024-11-27 11:25:08.776119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:40.156 [2024-11-27 11:25:08.776127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:40.156 [2024-11-27 11:25:08.776138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:40.156 [2024-11-27 11:25:08.776146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:40.156 [2024-11-27 11:25:08.776153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:40.156 [2024-11-27 11:25:08.776161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:40.156 [2024-11-27 11:25:08.776168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:40.156 [2024-11-27 11:25:08.776175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:40.156 [2024-11-27 11:25:08.776183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:40.156 [2024-11-27 11:25:08.776191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:40.156 [2024-11-27 11:25:08.776198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:40.156 [2024-11-27 11:25:08.776205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:40.156 [2024-11-27 11:25:08.776213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:40.156 [2024-11-27 11:25:08.776221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:40.156 [2024-11-27 11:25:08.776228] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:40.156 [2024-11-27 11:25:08.776242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:40.156 [2024-11-27 11:25:08.776250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:40.156 [2024-11-27 11:25:08.776258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:29:40.156 [2024-11-27 11:25:08.776269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:40.156 [2024-11-27 11:25:08.776277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:40.156 [2024-11-27 11:25:08.776285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:40.156 [2024-11-27 11:25:08.776292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:40.156 [2024-11-27 11:25:08.776300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:40.156 [2024-11-27 11:25:08.776307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:40.156 [2024-11-27 11:25:08.776316] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:40.156 [2024-11-27 11:25:08.776326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:40.156 [2024-11-27 11:25:08.776335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:40.156 [2024-11-27 11:25:08.776344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:40.156 [2024-11-27 11:25:08.776351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:40.156 [2024-11-27 11:25:08.776359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:40.156 [2024-11-27 11:25:08.776367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:40.156 [2024-11-27 11:25:08.776375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:40.156 [2024-11-27 11:25:08.776383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:40.156 [2024-11-27 11:25:08.776391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:40.156 [2024-11-27 11:25:08.776400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:40.156 [2024-11-27 11:25:08.776408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:40.156 [2024-11-27 11:25:08.776417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:40.156 [2024-11-27 11:25:08.776424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:40.156 [2024-11-27 11:25:08.776432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:40.156 [2024-11-27 11:25:08.776440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:40.156 [2024-11-27 11:25:08.776448] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:29:40.156 [2024-11-27 11:25:08.776461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:40.156 [2024-11-27 11:25:08.776469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:40.156 [2024-11-27 11:25:08.776478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:40.156 [2024-11-27 11:25:08.776486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:40.156 [2024-11-27 11:25:08.776498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:40.156 [2024-11-27 11:25:08.776506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.156 [2024-11-27 11:25:08.776514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:40.157 [2024-11-27 11:25:08.776523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.702 ms 00:29:40.157 [2024-11-27 11:25:08.776531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.783507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.783611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:40.157 [2024-11-27 11:25:08.783659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.925 ms 00:29:40.157 [2024-11-27 11:25:08.783681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.783729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.783753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:40.157 [2024-11-27 11:25:08.783773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:40.157 [2024-11-27 11:25:08.783792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.800318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.800450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:40.157 [2024-11-27 11:25:08.800505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.470 ms 00:29:40.157 [2024-11-27 11:25:08.800528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.800604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.800641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:40.157 [2024-11-27 11:25:08.800673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:40.157 [2024-11-27 11:25:08.800755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.800971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.801084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:40.157 [2024-11-27 11:25:08.801121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.094 ms 00:29:40.157 [2024-11-27 11:25:08.801201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.801292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.801405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:40.157 [2024-11-27 11:25:08.801484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:29:40.157 [2024-11-27 11:25:08.801518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.808267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.808408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:40.157 [2024-11-27 11:25:08.808476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.653 ms 00:29:40.157 [2024-11-27 11:25:08.808511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.808656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.808745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:40.157 [2024-11-27 11:25:08.808781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:40.157 [2024-11-27 11:25:08.808812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.814025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.814132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:40.157 [2024-11-27 11:25:08.814180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.118 ms 00:29:40.157 [2024-11-27 11:25:08.814202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.815421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.815519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:40.157 [2024-11-27 11:25:08.815575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:29:40.157 [2024-11-27 11:25:08.815596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.830999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.831158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:40.157 [2024-11-27 11:25:08.831209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.360 ms 00:29:40.157 [2024-11-27 11:25:08.831231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.831363] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:40.157 [2024-11-27 11:25:08.831470] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:40.157 [2024-11-27 11:25:08.831619] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:40.157 [2024-11-27 11:25:08.831720] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:40.157 [2024-11-27 11:25:08.831755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.831775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:40.157 [2024-11-27 
11:25:08.831823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.472 ms 00:29:40.157 [2024-11-27 11:25:08.831850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.831913] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:40.157 [2024-11-27 11:25:08.831994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.832042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:40.157 [2024-11-27 11:25:08.832065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:29:40.157 [2024-11-27 11:25:08.832085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.835559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.835674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:40.157 [2024-11-27 11:25:08.835721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.439 ms 00:29:40.157 [2024-11-27 11:25:08.835743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.836386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.836494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:40.157 [2024-11-27 11:25:08.836542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:40.157 [2024-11-27 11:25:08.836564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.157 [2024-11-27 11:25:08.836646] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:40.157 [2024-11-27 11:25:08.836818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.157 [2024-11-27 11:25:08.836918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:40.157 [2024-11-27 11:25:08.836940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.174 ms 00:29:40.157 [2024-11-27 11:25:08.836959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.739 [2024-11-27 11:25:09.545923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.739 [2024-11-27 11:25:09.546147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:40.739 [2024-11-27 11:25:09.546217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 708.579 ms 00:29:40.739 [2024-11-27 11:25:09.546242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.739 [2024-11-27 11:25:09.547846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.739 [2024-11-27 11:25:09.548021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:40.739 [2024-11-27 11:25:09.548092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.135 ms 00:29:40.739 [2024-11-27 11:25:09.548117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.739 [2024-11-27 11:25:09.548692] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:40.739 [2024-11-27 11:25:09.548846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.739 [2024-11-27 11:25:09.548920] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:40.739 [2024-11-27 11:25:09.548948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.670 ms 00:29:40.739 [2024-11-27 11:25:09.548970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.739 [2024-11-27 11:25:09.549018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.739 [2024-11-27 11:25:09.549056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:40.739 [2024-11-27 11:25:09.549128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:40.739 [2024-11-27 11:25:09.549159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.739 [2024-11-27 11:25:09.549223] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 712.563 ms, result 0 00:29:40.739 [2024-11-27 11:25:09.549290] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:40.739 [2024-11-27 11:25:09.549471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.739 [2024-11-27 11:25:09.549568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:40.739 [2024-11-27 11:25:09.549597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:29:40.740 [2024-11-27 11:25:09.549617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.230055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.230356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:41.685 [2024-11-27 11:25:10.230433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 679.838 ms 00:29:41.685 [2024-11-27 11:25:10.230459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.232219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.232267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:41.685 [2024-11-27 11:25:10.232278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.222 ms 00:29:41.685 [2024-11-27 11:25:10.232286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.232748] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:41.685 [2024-11-27 11:25:10.232776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.232786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:41.685 [2024-11-27 11:25:10.232795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.457 ms 00:29:41.685 [2024-11-27 11:25:10.232804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.232837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.232847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:41.685 [2024-11-27 11:25:10.232856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:41.685 [2024-11-27 11:25:10.232864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 
11:25:10.232919] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 683.605 ms, result 0 00:29:41.685 [2024-11-27 11:25:10.232968] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:41.685 [2024-11-27 11:25:10.232979] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:41.685 [2024-11-27 11:25:10.232990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.232999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:41.685 [2024-11-27 11:25:10.233008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1396.359 ms 00:29:41.685 [2024-11-27 11:25:10.233018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.233067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.233080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:41.685 [2024-11-27 11:25:10.233089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:41.685 [2024-11-27 11:25:10.233097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.242361] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:41.685 [2024-11-27 11:25:10.242616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.242651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:41.685 [2024-11-27 11:25:10.242725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.493 ms 00:29:41.685 [2024-11-27 11:25:10.242748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.243556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.243674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:41.685 [2024-11-27 11:25:10.243736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.633 ms 00:29:41.685 [2024-11-27 11:25:10.243763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.246039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.246154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:41.685 [2024-11-27 11:25:10.246213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.238 ms 00:29:41.685 [2024-11-27 11:25:10.246243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.246312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.246335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:41.685 [2024-11-27 11:25:10.246356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:41.685 [2024-11-27 11:25:10.246374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.246501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.246748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:41.685 
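Returning briefly to the layout dump printed during this startup: with the 4 KiB FTL block size these figures imply, the hex block counts in the superblock region dump reproduce the MiB sizes shown a few lines earlier, e.g. 0x480000 blocks × 4 KiB = 18432 MiB (the data_btm region), 0xe0 × 4 KiB ≈ 0.88 MiB (vmap), and 0x20 × 4 KiB ≈ 0.12 MiB (the superblock regions). The block size itself is an inference from those numbers, not something the log states directly.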
[2024-11-27 11:25:10.246775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:29:41.685 [2024-11-27 11:25:10.246794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.246839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.246861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:41.685 [2024-11-27 11:25:10.246880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:41.685 [2024-11-27 11:25:10.246925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.246978] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:41.685 [2024-11-27 11:25:10.247013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.247033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:41.685 [2024-11-27 11:25:10.247110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:41.685 [2024-11-27 11:25:10.247133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.247226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.685 [2024-11-27 11:25:10.247255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:41.685 [2024-11-27 11:25:10.247276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:29:41.685 [2024-11-27 11:25:10.247295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.685 [2024-11-27 11:25:10.248386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1479.986 ms, result 0 00:29:41.685 [2024-11-27 11:25:10.263282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.685 [2024-11-27 11:25:10.279286] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:41.685 [2024-11-27 11:25:10.287423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:41.947 11:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:41.947 Validate MD5 checksum, iteration 1 00:29:41.947 11:25:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:29:41.947 11:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:41.947 11:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:41.947 11:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:41.947 11:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:41.947 11:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:41.947 11:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:41.947 11:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:41.947 11:25:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:41.947 11:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:41.947 11:25:10 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:41.947 11:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:41.947 11:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:41.948 11:25:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:41.948 [2024-11-27 11:25:10.810980] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:41.948 [2024-11-27 11:25:10.811319] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93243 ] 00:29:42.207 [2024-11-27 11:25:10.961742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.207 [2024-11-27 11:25:11.010155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.591  [2024-11-27T11:25:13.418Z] Copying: 552/1024 [MB] (552 MBps) [2024-11-27T11:25:13.988Z] Copying: 1024/1024 [MB] (average 528 MBps) 00:29:45.105 00:29:45.105 11:25:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:45.105 11:25:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:47.020 11:25:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:47.020 Validate MD5 checksum, iteration 2 00:29:47.020 11:25:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e0b2dc760c6705a2c94443a3ceb8a3dc 00:29:47.020 11:25:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e0b2dc760c6705a2c94443a3ceb8a3dc != \e\0\b\2\d\c\7\6\0\c\6\7\0\5\a\2\c\9\4\4\4\3\a\3\c\e\b\8\a\3\d\c ]] 00:29:47.020 11:25:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:47.020 11:25:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:47.020 11:25:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:47.020 11:25:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:47.020 11:25:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:47.020 11:25:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:47.020 11:25:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:47.020 11:25:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:47.020 11:25:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:47.020 [2024-11-27 11:25:15.778286] Starting SPDK v24.09.1-pre git 
sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:47.020 [2024-11-27 11:25:15.780145] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93300 ] 00:29:47.280 [2024-11-27 11:25:15.929734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.280 [2024-11-27 11:25:15.975721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.668  [2024-11-27T11:25:18.493Z] Copying: 515/1024 [MB] (515 MBps) [2024-11-27T11:25:19.066Z] Copying: 1024/1024 [MB] (average 532 MBps) 00:29:50.183 00:29:50.183 11:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:50.183 11:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b5a661e01cdcbac23a5449d6bd604a52 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b5a661e01cdcbac23a5449d6bd604a52 != \b\5\a\6\6\1\e\0\1\c\d\c\b\a\c\2\3\a\5\4\4\9\d\6\b\d\6\0\4\a\5\2 ]] 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93210 ]] 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93210 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 93210 ']' 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 93210 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93210 00:29:52.095 killing process with pid 93210 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:52.095 11:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93210' 00:29:52.096 11:25:20 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 93210 00:29:52.096 11:25:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 93210 00:29:52.096 [2024-11-27 11:25:20.771778] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:52.096 [2024-11-27 11:25:20.776196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.776224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:52.096 [2024-11-27 11:25:20.776234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:52.096 [2024-11-27 11:25:20.776240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.776257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:52.096 [2024-11-27 11:25:20.776627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.776648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:52.096 [2024-11-27 11:25:20.776655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.360 ms 00:29:52.096 [2024-11-27 11:25:20.776661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.776841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.776855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:52.096 [2024-11-27 11:25:20.776861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.161 ms 00:29:52.096 [2024-11-27 11:25:20.776866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.777993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.778011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:52.096 [2024-11-27 11:25:20.778018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.115 ms 00:29:52.096 [2024-11-27 11:25:20.778024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.778863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.778881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:52.096 [2024-11-27 11:25:20.778897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.818 ms 00:29:52.096 [2024-11-27 11:25:20.778908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.780109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.780133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:52.096 [2024-11-27 11:25:20.780140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.176 ms 00:29:52.096 [2024-11-27 11:25:20.780146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.781412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.781438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:52.096 [2024-11-27 11:25:20.781445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.240 ms 00:29:52.096 [2024-11-27 11:25:20.781451] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.781499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.781505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:52.096 [2024-11-27 11:25:20.781512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:29:52.096 [2024-11-27 11:25:20.781518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.782691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.782713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:52.096 [2024-11-27 11:25:20.782719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.160 ms 00:29:52.096 [2024-11-27 11:25:20.782725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.783816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.783837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:52.096 [2024-11-27 11:25:20.783843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.067 ms 00:29:52.096 [2024-11-27 11:25:20.783848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.784768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.784790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:52.096 [2024-11-27 11:25:20.784797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.897 ms 00:29:52.096 [2024-11-27 11:25:20.784802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.785971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.785992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:52.096 [2024-11-27 11:25:20.785999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.124 ms 00:29:52.096 [2024-11-27 11:25:20.786004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.786028] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:52.096 [2024-11-27 11:25:20.786039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:52.096 [2024-11-27 11:25:20.786050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:52.096 [2024-11-27 11:25:20.786056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:52.096 [2024-11-27 11:25:20.786063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 
[2024-11-27 11:25:20.786093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:52.096 [2024-11-27 11:25:20.786154] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:52.096 [2024-11-27 11:25:20.786160] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a1a157f8-afb4-4c99-8e2c-a88e073d37a5 00:29:52.096 [2024-11-27 11:25:20.786166] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:52.096 [2024-11-27 11:25:20.786171] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:52.096 [2024-11-27 11:25:20.786177] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:52.096 [2024-11-27 11:25:20.786183] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:52.096 [2024-11-27 11:25:20.786189] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:52.096 [2024-11-27 11:25:20.786194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:52.096 [2024-11-27 11:25:20.786200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:52.096 [2024-11-27 11:25:20.786205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:52.096 [2024-11-27 11:25:20.786210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:52.096 [2024-11-27 11:25:20.786215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.786221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:52.096 [2024-11-27 11:25:20.786227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:29:52.096 [2024-11-27 11:25:20.786234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.787428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.787448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:52.096 [2024-11-27 11:25:20.787455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.182 ms 00:29:52.096 [2024-11-27 11:25:20.787461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:29:52.096 [2024-11-27 11:25:20.787527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.096 [2024-11-27 11:25:20.787533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:52.096 [2024-11-27 11:25:20.787546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:29:52.096 [2024-11-27 11:25:20.787552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.792025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.096 [2024-11-27 11:25:20.792046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:52.096 [2024-11-27 11:25:20.792054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.096 [2024-11-27 11:25:20.792059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.792080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.096 [2024-11-27 11:25:20.792092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:52.096 [2024-11-27 11:25:20.792100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.096 [2024-11-27 11:25:20.792105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.792158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.096 [2024-11-27 11:25:20.792166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:52.096 [2024-11-27 11:25:20.792172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.096 [2024-11-27 11:25:20.792177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.792190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.096 [2024-11-27 11:25:20.792196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:52.096 [2024-11-27 11:25:20.792202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.096 [2024-11-27 11:25:20.792210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.799574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.096 [2024-11-27 11:25:20.799608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:52.096 [2024-11-27 11:25:20.799616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.096 [2024-11-27 11:25:20.799622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.805594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.096 [2024-11-27 11:25:20.805620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:52.096 [2024-11-27 11:25:20.805632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.096 [2024-11-27 11:25:20.805637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.096 [2024-11-27 11:25:20.805674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.096 [2024-11-27 11:25:20.805681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:52.096 [2024-11-27 11:25:20.805688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.097 [2024-11-27 11:25:20.805694] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.097 [2024-11-27 11:25:20.805732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.097 [2024-11-27 11:25:20.805739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:52.097 [2024-11-27 11:25:20.805749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.097 [2024-11-27 11:25:20.805754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.097 [2024-11-27 11:25:20.805806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.097 [2024-11-27 11:25:20.805814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:52.097 [2024-11-27 11:25:20.805820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.097 [2024-11-27 11:25:20.805825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.097 [2024-11-27 11:25:20.805849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.097 [2024-11-27 11:25:20.805855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:52.097 [2024-11-27 11:25:20.805861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.097 [2024-11-27 11:25:20.805867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.097 [2024-11-27 11:25:20.805908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.097 [2024-11-27 11:25:20.805915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:52.097 [2024-11-27 11:25:20.805921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.097 [2024-11-27 11:25:20.805926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.097 [2024-11-27 11:25:20.805960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.097 [2024-11-27 11:25:20.805967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:52.097 [2024-11-27 11:25:20.805973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.097 [2024-11-27 11:25:20.805979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.097 [2024-11-27 11:25:20.806074] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 29.854 ms, result 0 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:52.358 Remove shared memory files 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:52.358 11:25:21 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid93017 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:52.358 00:29:52.358 real 1m12.963s 00:29:52.358 user 1m38.134s 00:29:52.358 sys 0m20.507s 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:52.358 ************************************ 00:29:52.358 END TEST ftl_upgrade_shutdown 00:29:52.358 ************************************ 00:29:52.358 11:25:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:52.358 11:25:21 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:52.358 11:25:21 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:52.358 11:25:21 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:29:52.358 11:25:21 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:52.358 11:25:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:52.358 ************************************ 00:29:52.358 START TEST ftl_restore_fast 00:29:52.358 ************************************ 00:29:52.358 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:52.619 * Looking for test storage... 00:29:52.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:29:52.619 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:52.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.620 --rc genhtml_branch_coverage=1 00:29:52.620 --rc genhtml_function_coverage=1 00:29:52.620 --rc genhtml_legend=1 00:29:52.620 --rc geninfo_all_blocks=1 00:29:52.620 --rc geninfo_unexecuted_blocks=1 00:29:52.620 00:29:52.620 ' 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:52.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.620 --rc genhtml_branch_coverage=1 00:29:52.620 --rc genhtml_function_coverage=1 00:29:52.620 --rc genhtml_legend=1 00:29:52.620 --rc geninfo_all_blocks=1 00:29:52.620 --rc geninfo_unexecuted_blocks=1 00:29:52.620 00:29:52.620 ' 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:52.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.620 --rc genhtml_branch_coverage=1 00:29:52.620 --rc genhtml_function_coverage=1 00:29:52.620 --rc genhtml_legend=1 00:29:52.620 --rc geninfo_all_blocks=1 00:29:52.620 --rc geninfo_unexecuted_blocks=1 00:29:52.620 00:29:52.620 ' 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:52.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.620 --rc genhtml_branch_coverage=1 00:29:52.620 --rc genhtml_function_coverage=1 00:29:52.620 --rc genhtml_legend=1 00:29:52.620 --rc geninfo_all_blocks=1 00:29:52.620 --rc geninfo_unexecuted_blocks=1 00:29:52.620 00:29:52.620 ' 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.f3ANUEZc8n 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:52.620 11:25:21 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=93441 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 93441 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 93441 ']' 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:52.620 11:25:21 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:52.620 [2024-11-27 11:25:21.477238] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:52.620 [2024-11-27 11:25:21.477390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93441 ] 00:29:52.882 [2024-11-27 11:25:21.627568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.882 [2024-11-27 11:25:21.661553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.455 11:25:22 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:53.456 11:25:22 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:29:53.456 11:25:22 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:53.456 11:25:22 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:53.456 11:25:22 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:53.456 11:25:22 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:53.456 11:25:22 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:53.456 11:25:22 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:53.717 11:25:22 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:53.717 11:25:22 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:53.717 11:25:22 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:53.717 11:25:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:29:53.717 11:25:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:53.717 11:25:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:53.717 11:25:22 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:29:53.717 11:25:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:53.979 11:25:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:53.979 { 00:29:53.979 "name": "nvme0n1", 00:29:53.979 "aliases": [ 00:29:53.979 "4c9fed09-0e17-4385-ba60-887812412c7f" 00:29:53.979 ], 00:29:53.979 "product_name": "NVMe disk", 00:29:53.979 "block_size": 4096, 00:29:53.979 "num_blocks": 1310720, 00:29:53.979 "uuid": "4c9fed09-0e17-4385-ba60-887812412c7f", 00:29:53.979 "numa_id": -1, 00:29:53.979 "assigned_rate_limits": { 00:29:53.979 "rw_ios_per_sec": 0, 00:29:53.979 "rw_mbytes_per_sec": 0, 00:29:53.979 "r_mbytes_per_sec": 0, 00:29:53.979 "w_mbytes_per_sec": 0 00:29:53.979 }, 00:29:53.979 "claimed": true, 00:29:53.979 "claim_type": "read_many_write_one", 00:29:53.979 "zoned": false, 00:29:53.979 "supported_io_types": { 00:29:53.979 "read": true, 00:29:53.979 "write": true, 00:29:53.979 "unmap": true, 00:29:53.979 "flush": true, 00:29:53.979 "reset": true, 00:29:53.979 "nvme_admin": true, 00:29:53.979 "nvme_io": true, 00:29:53.979 "nvme_io_md": false, 00:29:53.979 "write_zeroes": true, 00:29:53.979 "zcopy": false, 00:29:53.979 "get_zone_info": false, 00:29:53.979 "zone_management": false, 00:29:53.979 "zone_append": false, 00:29:53.979 "compare": true, 00:29:53.979 "compare_and_write": false, 00:29:53.979 "abort": true, 00:29:53.979 "seek_hole": false, 00:29:53.979 "seek_data": false, 00:29:53.979 "copy": true, 00:29:53.979 "nvme_iov_md": false 00:29:53.979 }, 00:29:53.979 "driver_specific": { 00:29:53.979 "nvme": [ 00:29:53.979 { 00:29:53.979 "pci_address": "0000:00:11.0", 00:29:53.979 "trid": { 00:29:53.979 "trtype": "PCIe", 00:29:53.979 "traddr": "0000:00:11.0" 00:29:53.979 }, 00:29:53.979 "ctrlr_data": { 00:29:53.979 "cntlid": 0, 00:29:53.979 "vendor_id": "0x1b36", 00:29:53.979 "model_number": "QEMU NVMe Ctrl", 00:29:53.979 "serial_number": "12341", 00:29:53.979 "firmware_revision": "8.0.0", 00:29:53.979 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:53.979 "oacs": { 00:29:53.979 "security": 0, 00:29:53.979 "format": 1, 00:29:53.979 "firmware": 0, 00:29:53.979 "ns_manage": 1 00:29:53.979 }, 00:29:53.979 "multi_ctrlr": false, 00:29:53.979 "ana_reporting": false 00:29:53.979 }, 00:29:53.979 "vs": { 00:29:53.979 "nvme_version": "1.4" 00:29:53.979 }, 00:29:53.979 "ns_data": { 00:29:53.979 "id": 1, 00:29:53.979 "can_share": false 00:29:53.979 } 00:29:53.979 } 00:29:53.979 ], 00:29:53.979 "mp_policy": "active_passive" 00:29:53.979 } 00:29:53.979 } 00:29:53.979 ]' 00:29:53.979 11:25:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:53.979 11:25:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:53.979 11:25:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:54.242 11:25:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:29:54.242 11:25:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:29:54.242 11:25:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:29:54.242 11:25:22 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:54.242 11:25:22 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:54.242 11:25:22 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:54.242 11:25:22 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:54.242 11:25:22 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:54.242 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=f970c4b1-f33b-4051-a481-b0f7154c382e 00:29:54.242 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:54.242 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f970c4b1-f33b-4051-a481-b0f7154c382e 00:29:54.502 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:54.762 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=ac950b67-84b9-44ba-bf31-b0998524143d 00:29:54.762 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ac950b67-84b9-44ba-bf31-b0998524143d 00:29:55.023 11:25:23 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=82c4936e-44e5-4b50-9efd-028d1560a1cd 00:29:55.023 11:25:23 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:55.023 11:25:23 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 82c4936e-44e5-4b50-9efd-028d1560a1cd 00:29:55.023 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:55.023 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:55.023 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=82c4936e-44e5-4b50-9efd-028d1560a1cd 00:29:55.023 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:55.023 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 82c4936e-44e5-4b50-9efd-028d1560a1cd 00:29:55.023 11:25:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=82c4936e-44e5-4b50-9efd-028d1560a1cd 00:29:55.023 11:25:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:55.023 11:25:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:55.023 11:25:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:29:55.023 11:25:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 82c4936e-44e5-4b50-9efd-028d1560a1cd 00:29:55.285 11:25:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:55.285 { 00:29:55.285 "name": "82c4936e-44e5-4b50-9efd-028d1560a1cd", 00:29:55.285 "aliases": [ 00:29:55.285 "lvs/nvme0n1p0" 00:29:55.285 ], 00:29:55.285 "product_name": "Logical Volume", 00:29:55.285 "block_size": 4096, 00:29:55.285 "num_blocks": 26476544, 00:29:55.285 "uuid": "82c4936e-44e5-4b50-9efd-028d1560a1cd", 00:29:55.285 "assigned_rate_limits": { 00:29:55.285 "rw_ios_per_sec": 0, 00:29:55.285 "rw_mbytes_per_sec": 0, 00:29:55.285 "r_mbytes_per_sec": 0, 00:29:55.285 "w_mbytes_per_sec": 0 00:29:55.285 }, 00:29:55.285 "claimed": false, 00:29:55.285 "zoned": false, 00:29:55.285 "supported_io_types": { 00:29:55.285 "read": true, 00:29:55.285 "write": true, 00:29:55.285 "unmap": true, 00:29:55.285 "flush": false, 00:29:55.285 "reset": true, 00:29:55.285 "nvme_admin": false, 00:29:55.285 "nvme_io": false, 00:29:55.285 "nvme_io_md": false, 00:29:55.285 "write_zeroes": true, 00:29:55.285 "zcopy": false, 00:29:55.285 "get_zone_info": false, 00:29:55.285 "zone_management": false, 00:29:55.285 
"zone_append": false, 00:29:55.285 "compare": false, 00:29:55.285 "compare_and_write": false, 00:29:55.285 "abort": false, 00:29:55.285 "seek_hole": true, 00:29:55.285 "seek_data": true, 00:29:55.285 "copy": false, 00:29:55.285 "nvme_iov_md": false 00:29:55.285 }, 00:29:55.285 "driver_specific": { 00:29:55.285 "lvol": { 00:29:55.285 "lvol_store_uuid": "ac950b67-84b9-44ba-bf31-b0998524143d", 00:29:55.285 "base_bdev": "nvme0n1", 00:29:55.285 "thin_provision": true, 00:29:55.285 "num_allocated_clusters": 0, 00:29:55.285 "snapshot": false, 00:29:55.285 "clone": false, 00:29:55.285 "esnap_clone": false 00:29:55.285 } 00:29:55.285 } 00:29:55.285 } 00:29:55.285 ]' 00:29:55.285 11:25:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:55.285 11:25:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:55.285 11:25:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:55.285 11:25:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:55.285 11:25:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:55.285 11:25:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:29:55.285 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:55.285 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:55.285 11:25:23 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:55.547 11:25:24 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:55.547 11:25:24 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:55.547 11:25:24 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 82c4936e-44e5-4b50-9efd-028d1560a1cd 00:29:55.547 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=82c4936e-44e5-4b50-9efd-028d1560a1cd 00:29:55.547 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:55.547 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:55.547 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:29:55.547 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 82c4936e-44e5-4b50-9efd-028d1560a1cd 00:29:55.809 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:55.810 { 00:29:55.810 "name": "82c4936e-44e5-4b50-9efd-028d1560a1cd", 00:29:55.810 "aliases": [ 00:29:55.810 "lvs/nvme0n1p0" 00:29:55.810 ], 00:29:55.810 "product_name": "Logical Volume", 00:29:55.810 "block_size": 4096, 00:29:55.810 "num_blocks": 26476544, 00:29:55.810 "uuid": "82c4936e-44e5-4b50-9efd-028d1560a1cd", 00:29:55.810 "assigned_rate_limits": { 00:29:55.810 "rw_ios_per_sec": 0, 00:29:55.810 "rw_mbytes_per_sec": 0, 00:29:55.810 "r_mbytes_per_sec": 0, 00:29:55.810 "w_mbytes_per_sec": 0 00:29:55.810 }, 00:29:55.810 "claimed": false, 00:29:55.810 "zoned": false, 00:29:55.810 "supported_io_types": { 00:29:55.810 "read": true, 00:29:55.810 "write": true, 00:29:55.810 "unmap": true, 00:29:55.810 "flush": false, 00:29:55.810 "reset": true, 00:29:55.810 "nvme_admin": false, 00:29:55.810 "nvme_io": false, 00:29:55.810 "nvme_io_md": false, 00:29:55.810 "write_zeroes": true, 00:29:55.810 "zcopy": false, 00:29:55.810 "get_zone_info": false, 00:29:55.810 
"zone_management": false, 00:29:55.810 "zone_append": false, 00:29:55.810 "compare": false, 00:29:55.810 "compare_and_write": false, 00:29:55.810 "abort": false, 00:29:55.810 "seek_hole": true, 00:29:55.810 "seek_data": true, 00:29:55.810 "copy": false, 00:29:55.810 "nvme_iov_md": false 00:29:55.810 }, 00:29:55.810 "driver_specific": { 00:29:55.810 "lvol": { 00:29:55.810 "lvol_store_uuid": "ac950b67-84b9-44ba-bf31-b0998524143d", 00:29:55.810 "base_bdev": "nvme0n1", 00:29:55.810 "thin_provision": true, 00:29:55.810 "num_allocated_clusters": 0, 00:29:55.810 "snapshot": false, 00:29:55.810 "clone": false, 00:29:55.810 "esnap_clone": false 00:29:55.810 } 00:29:55.810 } 00:29:55.810 } 00:29:55.810 ]' 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 82c4936e-44e5-4b50-9efd-028d1560a1cd 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=82c4936e-44e5-4b50-9efd-028d1560a1cd 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:29:55.810 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 82c4936e-44e5-4b50-9efd-028d1560a1cd 00:29:56.069 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:56.069 { 00:29:56.069 "name": "82c4936e-44e5-4b50-9efd-028d1560a1cd", 00:29:56.069 "aliases": [ 00:29:56.069 "lvs/nvme0n1p0" 00:29:56.069 ], 00:29:56.069 "product_name": "Logical Volume", 00:29:56.069 "block_size": 4096, 00:29:56.069 "num_blocks": 26476544, 00:29:56.069 "uuid": "82c4936e-44e5-4b50-9efd-028d1560a1cd", 00:29:56.069 "assigned_rate_limits": { 00:29:56.069 "rw_ios_per_sec": 0, 00:29:56.069 "rw_mbytes_per_sec": 0, 00:29:56.069 "r_mbytes_per_sec": 0, 00:29:56.069 "w_mbytes_per_sec": 0 00:29:56.069 }, 00:29:56.069 "claimed": false, 00:29:56.069 "zoned": false, 00:29:56.069 "supported_io_types": { 00:29:56.069 "read": true, 00:29:56.069 "write": true, 00:29:56.069 "unmap": true, 00:29:56.069 "flush": false, 00:29:56.069 "reset": true, 00:29:56.069 "nvme_admin": false, 00:29:56.069 "nvme_io": false, 00:29:56.069 "nvme_io_md": false, 00:29:56.070 "write_zeroes": true, 00:29:56.070 "zcopy": false, 00:29:56.070 "get_zone_info": false, 00:29:56.070 "zone_management": false, 00:29:56.070 "zone_append": false, 00:29:56.070 "compare": false, 00:29:56.070 "compare_and_write": false, 00:29:56.070 "abort": false, 
00:29:56.070 "seek_hole": true, 00:29:56.070 "seek_data": true, 00:29:56.070 "copy": false, 00:29:56.070 "nvme_iov_md": false 00:29:56.070 }, 00:29:56.070 "driver_specific": { 00:29:56.070 "lvol": { 00:29:56.070 "lvol_store_uuid": "ac950b67-84b9-44ba-bf31-b0998524143d", 00:29:56.070 "base_bdev": "nvme0n1", 00:29:56.070 "thin_provision": true, 00:29:56.070 "num_allocated_clusters": 0, 00:29:56.070 "snapshot": false, 00:29:56.070 "clone": false, 00:29:56.070 "esnap_clone": false 00:29:56.070 } 00:29:56.070 } 00:29:56.070 } 00:29:56.070 ]' 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 82c4936e-44e5-4b50-9efd-028d1560a1cd --l2p_dram_limit 10' 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:56.070 11:25:24 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 82c4936e-44e5-4b50-9efd-028d1560a1cd --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:56.330 [2024-11-27 11:25:25.050272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.330 [2024-11-27 11:25:25.050319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:56.330 [2024-11-27 11:25:25.050331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:56.330 [2024-11-27 11:25:25.050339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.330 [2024-11-27 11:25:25.050374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.330 [2024-11-27 11:25:25.050383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:56.330 [2024-11-27 11:25:25.050392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:56.330 [2024-11-27 11:25:25.050404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.330 [2024-11-27 11:25:25.050428] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:56.330 [2024-11-27 11:25:25.050606] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:56.330 [2024-11-27 11:25:25.050618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.330 [2024-11-27 11:25:25.050628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:56.330 [2024-11-27 11:25:25.050637] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:29:56.330 [2024-11-27 11:25:25.050646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.330 [2024-11-27 11:25:25.050669] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bc5eda24-0ce9-482a-92cf-9e0de2879e0b 00:29:56.330 [2024-11-27 11:25:25.051983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.330 [2024-11-27 11:25:25.052008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:56.330 [2024-11-27 11:25:25.052021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:56.330 [2024-11-27 11:25:25.052028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.330 [2024-11-27 11:25:25.058951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.330 [2024-11-27 11:25:25.058978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:56.330 [2024-11-27 11:25:25.058988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.864 ms 00:29:56.330 [2024-11-27 11:25:25.058997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.330 [2024-11-27 11:25:25.059059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.330 [2024-11-27 11:25:25.059068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:56.330 [2024-11-27 11:25:25.059078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:56.330 [2024-11-27 11:25:25.059087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.330 [2024-11-27 11:25:25.059126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.330 [2024-11-27 11:25:25.059134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:56.330 [2024-11-27 11:25:25.059142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:56.330 [2024-11-27 11:25:25.059147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.330 [2024-11-27 11:25:25.059165] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:56.330 [2024-11-27 11:25:25.060842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.330 [2024-11-27 11:25:25.060871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:56.330 [2024-11-27 11:25:25.060881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.682 ms 00:29:56.330 [2024-11-27 11:25:25.060905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.330 [2024-11-27 11:25:25.060933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.330 [2024-11-27 11:25:25.060941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:56.330 [2024-11-27 11:25:25.060947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:56.330 [2024-11-27 11:25:25.060957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.330 [2024-11-27 11:25:25.060970] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:56.330 [2024-11-27 11:25:25.061108] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:56.330 [2024-11-27 11:25:25.061120] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:56.330 [2024-11-27 11:25:25.061130] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:56.330 [2024-11-27 11:25:25.061138] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:56.330 [2024-11-27 11:25:25.061151] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:56.330 [2024-11-27 11:25:25.061157] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:56.330 [2024-11-27 11:25:25.061168] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:56.330 [2024-11-27 11:25:25.061175] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:56.330 [2024-11-27 11:25:25.061182] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:56.330 [2024-11-27 11:25:25.061190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.330 [2024-11-27 11:25:25.061197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:56.331 [2024-11-27 11:25:25.061204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:29:56.331 [2024-11-27 11:25:25.061211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.331 [2024-11-27 11:25:25.061276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.331 [2024-11-27 11:25:25.061285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:56.331 [2024-11-27 11:25:25.061291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:56.331 [2024-11-27 11:25:25.061298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.331 [2024-11-27 11:25:25.061372] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:56.331 [2024-11-27 11:25:25.061385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:56.331 [2024-11-27 11:25:25.061393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:56.331 [2024-11-27 11:25:25.061401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:56.331 [2024-11-27 11:25:25.061407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:56.331 [2024-11-27 11:25:25.061414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:56.331 [2024-11-27 11:25:25.061419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:56.331 [2024-11-27 11:25:25.061426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:56.331 [2024-11-27 11:25:25.061432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:56.331 [2024-11-27 11:25:25.061439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:56.331 [2024-11-27 11:25:25.061444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:56.331 [2024-11-27 11:25:25.061453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:56.331 [2024-11-27 11:25:25.061460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:56.331 [2024-11-27 11:25:25.061469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:56.331 [2024-11-27 11:25:25.061474] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:56.331 [2024-11-27 11:25:25.061480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:56.331 [2024-11-27 11:25:25.061486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:56.331 [2024-11-27 11:25:25.061493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:56.331 [2024-11-27 11:25:25.061499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:56.331 [2024-11-27 11:25:25.061507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:56.331 [2024-11-27 11:25:25.061513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:56.331 [2024-11-27 11:25:25.061520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:56.331 [2024-11-27 11:25:25.061525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:56.331 [2024-11-27 11:25:25.061533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:56.331 [2024-11-27 11:25:25.061538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:56.331 [2024-11-27 11:25:25.061546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:56.331 [2024-11-27 11:25:25.061552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:56.331 [2024-11-27 11:25:25.061559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:56.331 [2024-11-27 11:25:25.061564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:56.331 [2024-11-27 11:25:25.061573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:56.331 [2024-11-27 11:25:25.061580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:56.331 [2024-11-27 11:25:25.061587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:56.331 [2024-11-27 11:25:25.061593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:56.331 [2024-11-27 11:25:25.061600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:56.331 [2024-11-27 11:25:25.061605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:56.331 [2024-11-27 11:25:25.061612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:56.331 [2024-11-27 11:25:25.061619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:56.331 [2024-11-27 11:25:25.061629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:56.331 [2024-11-27 11:25:25.061635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:56.331 [2024-11-27 11:25:25.061644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:56.331 [2024-11-27 11:25:25.061650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:56.331 [2024-11-27 11:25:25.061658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:56.331 [2024-11-27 11:25:25.061665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:56.331 [2024-11-27 11:25:25.061672] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:56.331 [2024-11-27 11:25:25.061681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:56.331 [2024-11-27 11:25:25.061691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:29:56.331 [2024-11-27 11:25:25.061699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:56.331 [2024-11-27 11:25:25.061710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:56.331 [2024-11-27 11:25:25.061716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:56.331 [2024-11-27 11:25:25.061723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:56.331 [2024-11-27 11:25:25.061729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:56.331 [2024-11-27 11:25:25.061737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:56.331 [2024-11-27 11:25:25.061742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:56.331 [2024-11-27 11:25:25.061753] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:56.331 [2024-11-27 11:25:25.061762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.331 [2024-11-27 11:25:25.061771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:56.331 [2024-11-27 11:25:25.061777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:56.331 [2024-11-27 11:25:25.061785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:56.331 [2024-11-27 11:25:25.061791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:56.331 [2024-11-27 11:25:25.061799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:56.331 [2024-11-27 11:25:25.061808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:56.331 [2024-11-27 11:25:25.061818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:56.331 [2024-11-27 11:25:25.061825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:56.331 [2024-11-27 11:25:25.061833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:56.331 [2024-11-27 11:25:25.061839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:56.331 [2024-11-27 11:25:25.061847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:56.331 [2024-11-27 11:25:25.061854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:56.331 [2024-11-27 11:25:25.061861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:56.332 [2024-11-27 11:25:25.061870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:29:56.332 [2024-11-27 11:25:25.061877] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:56.332 [2024-11-27 11:25:25.061886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.332 [2024-11-27 11:25:25.061909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:56.332 [2024-11-27 11:25:25.061915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:56.332 [2024-11-27 11:25:25.061922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:56.332 [2024-11-27 11:25:25.061928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:56.332 [2024-11-27 11:25:25.061936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.332 [2024-11-27 11:25:25.061942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:56.332 [2024-11-27 11:25:25.061952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:29:56.332 [2024-11-27 11:25:25.061959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.332 [2024-11-27 11:25:25.061989] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:56.332 [2024-11-27 11:25:25.061997] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:58.870 [2024-11-27 11:25:27.725300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.870 [2024-11-27 11:25:27.725505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:58.870 [2024-11-27 11:25:27.725534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2663.294 ms 00:29:58.870 [2024-11-27 11:25:27.725543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.870 [2024-11-27 11:25:27.736272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.870 [2024-11-27 11:25:27.736396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:58.870 [2024-11-27 11:25:27.736458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.644 ms 00:29:58.870 [2024-11-27 11:25:27.736483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.870 [2024-11-27 11:25:27.736587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.870 [2024-11-27 11:25:27.736610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:58.870 [2024-11-27 11:25:27.736624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:58.870 [2024-11-27 11:25:27.736633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.870 [2024-11-27 11:25:27.746392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.870 [2024-11-27 11:25:27.746429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:58.870 [2024-11-27 11:25:27.746441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.709 ms 00:29:58.870 [2024-11-27 11:25:27.746449] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.870 [2024-11-27 11:25:27.746485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.870 [2024-11-27 11:25:27.746496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:58.870 [2024-11-27 11:25:27.746508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:58.870 [2024-11-27 11:25:27.746515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.870 [2024-11-27 11:25:27.746942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.870 [2024-11-27 11:25:27.746964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:58.870 [2024-11-27 11:25:27.746976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:29:58.870 [2024-11-27 11:25:27.746984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.870 [2024-11-27 11:25:27.747107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.870 [2024-11-27 11:25:27.747117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:58.870 [2024-11-27 11:25:27.747130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:29:58.870 [2024-11-27 11:25:27.747139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.768617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-11-27 11:25:27.768707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:59.129 [2024-11-27 11:25:27.768744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.444 ms 00:29:59.129 [2024-11-27 11:25:27.768765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.778048] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:59.129 [2024-11-27 11:25:27.781410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-11-27 11:25:27.781446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:59.129 [2024-11-27 11:25:27.781457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.319 ms 00:29:59.129 [2024-11-27 11:25:27.781467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.828485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-11-27 11:25:27.828625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:59.129 [2024-11-27 11:25:27.828642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.992 ms 00:29:59.129 [2024-11-27 11:25:27.828656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.828905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-11-27 11:25:27.828921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:59.129 [2024-11-27 11:25:27.828930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:29:59.129 [2024-11-27 11:25:27.828941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.831771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-11-27 11:25:27.831910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:29:59.129 [2024-11-27 11:25:27.831926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.809 ms 00:29:59.129 [2024-11-27 11:25:27.831937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.834571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-11-27 11:25:27.834606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:59.129 [2024-11-27 11:25:27.834617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.547 ms 00:29:59.129 [2024-11-27 11:25:27.834627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.834931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-11-27 11:25:27.834949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:59.129 [2024-11-27 11:25:27.834958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:29:59.129 [2024-11-27 11:25:27.834970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.862178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-11-27 11:25:27.862216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:59.129 [2024-11-27 11:25:27.862227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.189 ms 00:29:59.129 [2024-11-27 11:25:27.862238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.866321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-11-27 11:25:27.866358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:59.129 [2024-11-27 11:25:27.866370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.037 ms 00:29:59.129 [2024-11-27 11:25:27.866381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.869523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-11-27 11:25:27.869558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:59.129 [2024-11-27 11:25:27.869567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.108 ms 00:29:59.129 [2024-11-27 11:25:27.869576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.872628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-11-27 11:25:27.872666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:59.129 [2024-11-27 11:25:27.872677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.020 ms 00:29:59.129 [2024-11-27 11:25:27.872689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.872726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-11-27 11:25:27.872739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:59.129 [2024-11-27 11:25:27.872749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:59.129 [2024-11-27 11:25:27.872760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.872836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.129 [2024-11-27 11:25:27.872848] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:59.129 [2024-11-27 11:25:27.872857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:59.129 [2024-11-27 11:25:27.872872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.129 [2024-11-27 11:25:27.873958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2823.240 ms, result 0 00:29:59.129 { 00:29:59.129 "name": "ftl0", 00:29:59.129 "uuid": "bc5eda24-0ce9-482a-92cf-9e0de2879e0b" 00:29:59.129 } 00:29:59.129 11:25:27 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:59.129 11:25:27 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:59.388 11:25:28 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:59.388 11:25:28 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:59.648 [2024-11-27 11:25:28.285291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.648 [2024-11-27 11:25:28.285335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:59.648 [2024-11-27 11:25:28.285350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:59.648 [2024-11-27 11:25:28.285358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.648 [2024-11-27 11:25:28.285388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:59.648 [2024-11-27 11:25:28.285964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.648 [2024-11-27 11:25:28.285996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:59.648 [2024-11-27 11:25:28.286006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:29:59.648 [2024-11-27 11:25:28.286016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.648 [2024-11-27 11:25:28.286273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.648 [2024-11-27 11:25:28.286297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:59.648 [2024-11-27 11:25:28.286307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:29:59.648 [2024-11-27 11:25:28.286318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.648 [2024-11-27 11:25:28.289564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.648 [2024-11-27 11:25:28.289589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:59.648 [2024-11-27 11:25:28.289599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.230 ms 00:29:59.648 [2024-11-27 11:25:28.289609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.648 [2024-11-27 11:25:28.295773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.648 [2024-11-27 11:25:28.295955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:59.648 [2024-11-27 11:25:28.295972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.147 ms 00:29:59.648 [2024-11-27 11:25:28.295983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.648 [2024-11-27 11:25:28.298079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:59.648 [2024-11-27 11:25:28.298113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:59.648 [2024-11-27 11:25:28.298122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.027 ms 00:29:59.648 [2024-11-27 11:25:28.298131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.648 [2024-11-27 11:25:28.303250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.648 [2024-11-27 11:25:28.303373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:59.648 [2024-11-27 11:25:28.303388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.087 ms 00:29:59.648 [2024-11-27 11:25:28.303403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.648 [2024-11-27 11:25:28.303526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.648 [2024-11-27 11:25:28.303539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:59.648 [2024-11-27 11:25:28.303548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:29:59.648 [2024-11-27 11:25:28.303558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.648 [2024-11-27 11:25:28.305907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.648 [2024-11-27 11:25:28.305941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:59.648 [2024-11-27 11:25:28.305950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.330 ms 00:29:59.648 [2024-11-27 11:25:28.305959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.648 [2024-11-27 11:25:28.307661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.648 [2024-11-27 11:25:28.307696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:59.648 [2024-11-27 11:25:28.307705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:29:59.648 [2024-11-27 11:25:28.307714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.648 [2024-11-27 11:25:28.309198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.648 [2024-11-27 11:25:28.309233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:59.648 [2024-11-27 11:25:28.309241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:29:59.648 [2024-11-27 11:25:28.309250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.648 [2024-11-27 11:25:28.310874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.648 [2024-11-27 11:25:28.310999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:59.648 [2024-11-27 11:25:28.311013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.569 ms 00:29:59.649 [2024-11-27 11:25:28.311022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.649 [2024-11-27 11:25:28.311051] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:59.649 [2024-11-27 11:25:28.311067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311089] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311302] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 
11:25:28.311516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:59.649 [2024-11-27 11:25:28.311632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:29:59.650 [2024-11-27 11:25:28.311733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:59.650 [2024-11-27 11:25:28.311951] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:59.650 [2024-11-27 11:25:28.311960] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc5eda24-0ce9-482a-92cf-9e0de2879e0b 00:29:59.650 
[2024-11-27 11:25:28.311969] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:59.650 [2024-11-27 11:25:28.311977] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:59.650 [2024-11-27 11:25:28.311985] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:59.650 [2024-11-27 11:25:28.311993] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:59.650 [2024-11-27 11:25:28.312003] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:59.650 [2024-11-27 11:25:28.312010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:59.650 [2024-11-27 11:25:28.312020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:59.650 [2024-11-27 11:25:28.312027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:59.650 [2024-11-27 11:25:28.312035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:59.650 [2024-11-27 11:25:28.312042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.650 [2024-11-27 11:25:28.312053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:59.650 [2024-11-27 11:25:28.312061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:29:59.650 [2024-11-27 11:25:28.312069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.650 [2024-11-27 11:25:28.313762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.650 [2024-11-27 11:25:28.313785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:59.650 [2024-11-27 11:25:28.313794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.676 ms 00:29:59.650 [2024-11-27 11:25:28.313804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.650 [2024-11-27 11:25:28.313928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.650 [2024-11-27 11:25:28.313942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:59.650 [2024-11-27 11:25:28.313950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:29:59.650 [2024-11-27 11:25:28.313959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.650 [2024-11-27 11:25:28.320418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.650 [2024-11-27 11:25:28.320457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:59.650 [2024-11-27 11:25:28.320467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.650 [2024-11-27 11:25:28.320476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.650 [2024-11-27 11:25:28.320533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.650 [2024-11-27 11:25:28.320543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:59.650 [2024-11-27 11:25:28.320551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.650 [2024-11-27 11:25:28.320561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.650 [2024-11-27 11:25:28.320627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.650 [2024-11-27 11:25:28.320642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:59.650 [2024-11-27 11:25:28.320650] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.650 [2024-11-27 11:25:28.320659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.650 [2024-11-27 11:25:28.320676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.650 [2024-11-27 11:25:28.320687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:59.650 [2024-11-27 11:25:28.320698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.650 [2024-11-27 11:25:28.320706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.650 [2024-11-27 11:25:28.331732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.650 [2024-11-27 11:25:28.331772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:59.650 [2024-11-27 11:25:28.331782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.650 [2024-11-27 11:25:28.331791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.650 [2024-11-27 11:25:28.341592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.650 [2024-11-27 11:25:28.341723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:59.650 [2024-11-27 11:25:28.341778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.650 [2024-11-27 11:25:28.341807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.650 [2024-11-27 11:25:28.341903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.650 [2024-11-27 11:25:28.341967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:59.651 [2024-11-27 11:25:28.342035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.651 [2024-11-27 11:25:28.342066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.651 [2024-11-27 11:25:28.342133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.651 [2024-11-27 11:25:28.342160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:59.651 [2024-11-27 11:25:28.342230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.651 [2024-11-27 11:25:28.342256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.651 [2024-11-27 11:25:28.342342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.651 [2024-11-27 11:25:28.342368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:59.651 [2024-11-27 11:25:28.342436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.651 [2024-11-27 11:25:28.342461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.651 [2024-11-27 11:25:28.342517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.651 [2024-11-27 11:25:28.342580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:59.651 [2024-11-27 11:25:28.342603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.651 [2024-11-27 11:25:28.342626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.651 [2024-11-27 11:25:28.342700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.651 [2024-11-27 11:25:28.342770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:29:59.651 [2024-11-27 11:25:28.342816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.651 [2024-11-27 11:25:28.342840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.651 [2024-11-27 11:25:28.342912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:59.651 [2024-11-27 11:25:28.342992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:59.651 [2024-11-27 11:25:28.343017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:59.651 [2024-11-27 11:25:28.343039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.651 [2024-11-27 11:25:28.343188] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.863 ms, result 0 00:29:59.651 true 00:29:59.651 11:25:28 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 93441 00:29:59.651 11:25:28 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93441 ']' 00:29:59.651 11:25:28 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93441 00:29:59.651 11:25:28 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:29:59.651 11:25:28 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:59.651 11:25:28 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93441 00:29:59.651 killing process with pid 93441 00:29:59.651 11:25:28 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:59.651 11:25:28 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:59.651 11:25:28 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93441' 00:29:59.651 11:25:28 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 93441 00:29:59.651 11:25:28 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 93441 00:30:04.961 11:25:33 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:09.158 262144+0 records in 00:30:09.158 262144+0 records out 00:30:09.158 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.02615 s, 267 MB/s 00:30:09.158 11:25:37 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:11.066 11:25:39 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:11.067 [2024-11-27 11:25:39.587629] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
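The ftl_restore_fast steps captured above walk through the fast-restore setup end to end: create the FTL bdev ftl0, capture the bdev subsystem configuration as JSON, unload ftl0 so its superblock and metadata are persisted by a clean shutdown, generate a 1 GiB random test file and record its md5sum, then start spdk_dd to write that file into ftl0 using the saved configuration. Below is a minimal shell sketch of that sequence, condensed from the commands that appear verbatim in this log; paths are the ones used by this run, the redirection of the assembled JSON into ftl.json is an assumption inferred from the --json path passed to spdk_dd, and the rest of restore.sh (parsing the md5 output, re-reading and re-verifying the data after restart) is omitted.

  # capture the current bdev subsystem config from the running SPDK app
  # (writing it to ftl.json is assumed from the --json path spdk_dd uses later)
  SPDK=/home/vagrant/spdk_repo/spdk
  CFG=$SPDK/test/ftl/config/ftl.json
  { echo '{"subsystems": ['; $SPDK/scripts/rpc.py save_subsystem_config -n bdev; echo ']}'; } > $CFG

  # unload ftl0 so the superblock, L2P and band metadata are persisted (clean shutdown)
  $SPDK/scripts/rpc.py bdev_ftl_unload -b ftl0

  # generate 1 GiB of random data (256K blocks of 4 KiB) and record its checksum
  dd if=/dev/urandom of=$SPDK/test/ftl/testfile bs=4K count=256K
  md5sum $SPDK/test/ftl/testfile

  # replay the data into ftl0 via spdk_dd, restoring the bdev stack from the saved JSON
  $SPDK/build/bin/spdk_dd --if=$SPDK/test/ftl/testfile --ob=ftl0 --json=$CFG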
00:30:11.067 [2024-11-27 11:25:39.587722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93646 ] 00:30:11.067 [2024-11-27 11:25:39.730876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.067 [2024-11-27 11:25:39.784239] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.067 [2024-11-27 11:25:39.886909] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:11.067 [2024-11-27 11:25:39.887142] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:11.328 [2024-11-27 11:25:40.042033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.328 [2024-11-27 11:25:40.042241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:11.328 [2024-11-27 11:25:40.042317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:11.329 [2024-11-27 11:25:40.042342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.329 [2024-11-27 11:25:40.042417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.329 [2024-11-27 11:25:40.042443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:11.329 [2024-11-27 11:25:40.042463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:11.329 [2024-11-27 11:25:40.042481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.329 [2024-11-27 11:25:40.042513] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:11.329 [2024-11-27 11:25:40.043167] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:11.329 [2024-11-27 11:25:40.043287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.329 [2024-11-27 11:25:40.043335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:11.329 [2024-11-27 11:25:40.043363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:30:11.329 [2024-11-27 11:25:40.043389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.329 [2024-11-27 11:25:40.044824] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:11.329 [2024-11-27 11:25:40.047563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.329 [2024-11-27 11:25:40.047667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:11.329 [2024-11-27 11:25:40.047715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.740 ms 00:30:11.329 [2024-11-27 11:25:40.047746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.329 [2024-11-27 11:25:40.047818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.329 [2024-11-27 11:25:40.047842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:11.329 [2024-11-27 11:25:40.047862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:11.329 [2024-11-27 11:25:40.047883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.329 [2024-11-27 11:25:40.054364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:11.329 [2024-11-27 11:25:40.054462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:11.329 [2024-11-27 11:25:40.054509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.403 ms 00:30:11.329 [2024-11-27 11:25:40.054539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.329 [2024-11-27 11:25:40.054636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.329 [2024-11-27 11:25:40.054663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:11.329 [2024-11-27 11:25:40.054724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:11.329 [2024-11-27 11:25:40.054747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.329 [2024-11-27 11:25:40.054804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.329 [2024-11-27 11:25:40.054906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:11.329 [2024-11-27 11:25:40.054928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:11.329 [2024-11-27 11:25:40.054975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.329 [2024-11-27 11:25:40.055021] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:11.329 [2024-11-27 11:25:40.056729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.329 [2024-11-27 11:25:40.056819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:11.329 [2024-11-27 11:25:40.056863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.720 ms 00:30:11.329 [2024-11-27 11:25:40.056885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.329 [2024-11-27 11:25:40.056959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.329 [2024-11-27 11:25:40.056981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:11.329 [2024-11-27 11:25:40.057006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:11.329 [2024-11-27 11:25:40.057024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.329 [2024-11-27 11:25:40.057083] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:11.329 [2024-11-27 11:25:40.057124] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:11.329 [2024-11-27 11:25:40.057233] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:11.329 [2024-11-27 11:25:40.057275] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:11.329 [2024-11-27 11:25:40.057510] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:11.329 [2024-11-27 11:25:40.057522] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:11.329 [2024-11-27 11:25:40.057533] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:11.329 [2024-11-27 11:25:40.057544] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:11.329 [2024-11-27 11:25:40.057557] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:11.329 [2024-11-27 11:25:40.057565] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:11.329 [2024-11-27 11:25:40.057573] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:11.329 [2024-11-27 11:25:40.057580] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:11.329 [2024-11-27 11:25:40.057593] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:11.329 [2024-11-27 11:25:40.057601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.329 [2024-11-27 11:25:40.057612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:11.329 [2024-11-27 11:25:40.057623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:30:11.329 [2024-11-27 11:25:40.057630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.329 [2024-11-27 11:25:40.057722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.329 [2024-11-27 11:25:40.057737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:11.329 [2024-11-27 11:25:40.057745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:30:11.329 [2024-11-27 11:25:40.057752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.329 [2024-11-27 11:25:40.057850] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:11.329 [2024-11-27 11:25:40.057860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:11.329 [2024-11-27 11:25:40.057868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:11.329 [2024-11-27 11:25:40.057882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.329 [2024-11-27 11:25:40.057981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:11.329 [2024-11-27 11:25:40.058007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:11.329 [2024-11-27 11:25:40.058027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:11.329 [2024-11-27 11:25:40.058046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:11.329 [2024-11-27 11:25:40.058089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:11.329 [2024-11-27 11:25:40.058110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:11.329 [2024-11-27 11:25:40.058129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:11.329 [2024-11-27 11:25:40.058147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:11.329 [2024-11-27 11:25:40.058192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:11.329 [2024-11-27 11:25:40.058214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:11.329 [2024-11-27 11:25:40.058232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:11.329 [2024-11-27 11:25:40.058251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.329 [2024-11-27 11:25:40.058269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:11.329 [2024-11-27 11:25:40.058289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:11.329 [2024-11-27 11:25:40.058307] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.329 [2024-11-27 11:25:40.058326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:11.329 [2024-11-27 11:25:40.058343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:11.329 [2024-11-27 11:25:40.058361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.329 [2024-11-27 11:25:40.058379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:11.329 [2024-11-27 11:25:40.058397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:11.329 [2024-11-27 11:25:40.058415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.329 [2024-11-27 11:25:40.058433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:11.329 [2024-11-27 11:25:40.058478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:11.329 [2024-11-27 11:25:40.058498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.329 [2024-11-27 11:25:40.058522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:11.329 [2024-11-27 11:25:40.058541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:11.329 [2024-11-27 11:25:40.058558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.329 [2024-11-27 11:25:40.058577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:11.329 [2024-11-27 11:25:40.058628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:11.329 [2024-11-27 11:25:40.058638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:11.329 [2024-11-27 11:25:40.058644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:11.329 [2024-11-27 11:25:40.058652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:11.329 [2024-11-27 11:25:40.058659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:11.329 [2024-11-27 11:25:40.058665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:11.329 [2024-11-27 11:25:40.058673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:11.329 [2024-11-27 11:25:40.058680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.329 [2024-11-27 11:25:40.058687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:11.329 [2024-11-27 11:25:40.058694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:11.329 [2024-11-27 11:25:40.058701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.330 [2024-11-27 11:25:40.058708] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:11.330 [2024-11-27 11:25:40.058719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:11.330 [2024-11-27 11:25:40.058727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:11.330 [2024-11-27 11:25:40.058737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.330 [2024-11-27 11:25:40.058744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:11.330 [2024-11-27 11:25:40.058751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:11.330 [2024-11-27 11:25:40.058758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:11.330 
[2024-11-27 11:25:40.058766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:11.330 [2024-11-27 11:25:40.058772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:11.330 [2024-11-27 11:25:40.058779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:11.330 [2024-11-27 11:25:40.058788] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:11.330 [2024-11-27 11:25:40.058797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.330 [2024-11-27 11:25:40.058806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:11.330 [2024-11-27 11:25:40.058813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:11.330 [2024-11-27 11:25:40.058820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:11.330 [2024-11-27 11:25:40.058827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:11.330 [2024-11-27 11:25:40.058834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:11.330 [2024-11-27 11:25:40.058843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:11.330 [2024-11-27 11:25:40.058851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:11.330 [2024-11-27 11:25:40.058858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:11.330 [2024-11-27 11:25:40.058865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:11.330 [2024-11-27 11:25:40.058872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:11.330 [2024-11-27 11:25:40.058880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:11.330 [2024-11-27 11:25:40.058898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:11.330 [2024-11-27 11:25:40.058906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:11.330 [2024-11-27 11:25:40.058914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:11.330 [2024-11-27 11:25:40.058921] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:11.330 [2024-11-27 11:25:40.058930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.330 [2024-11-27 11:25:40.058938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:11.330 [2024-11-27 11:25:40.058946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:11.330 [2024-11-27 11:25:40.058954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:11.330 [2024-11-27 11:25:40.058961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:11.330 [2024-11-27 11:25:40.058969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.058980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:11.330 [2024-11-27 11:25:40.058988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:30:11.330 [2024-11-27 11:25:40.058995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.080808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.080965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:11.330 [2024-11-27 11:25:40.080989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.732 ms 00:30:11.330 [2024-11-27 11:25:40.080999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.081110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.081122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:11.330 [2024-11-27 11:25:40.081132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:30:11.330 [2024-11-27 11:25:40.081139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.092745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.092923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:11.330 [2024-11-27 11:25:40.092946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.547 ms 00:30:11.330 [2024-11-27 11:25:40.092958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.093001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.093015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:11.330 [2024-11-27 11:25:40.093027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:11.330 [2024-11-27 11:25:40.093037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.093532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.093558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:11.330 [2024-11-27 11:25:40.093569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:30:11.330 [2024-11-27 11:25:40.093577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.093714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.093731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:11.330 [2024-11-27 11:25:40.093741] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:30:11.330 [2024-11-27 11:25:40.093749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.099522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.099550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:11.330 [2024-11-27 11:25:40.099565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.753 ms 00:30:11.330 [2024-11-27 11:25:40.099572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.102725] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:11.330 [2024-11-27 11:25:40.102758] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:11.330 [2024-11-27 11:25:40.102774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.102782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:11.330 [2024-11-27 11:25:40.102790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.118 ms 00:30:11.330 [2024-11-27 11:25:40.102797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.117970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.118093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:11.330 [2024-11-27 11:25:40.118109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.135 ms 00:30:11.330 [2024-11-27 11:25:40.118120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.119983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.120010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:11.330 [2024-11-27 11:25:40.120020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.828 ms 00:30:11.330 [2024-11-27 11:25:40.120027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.121659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.121764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:11.330 [2024-11-27 11:25:40.121778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:30:11.330 [2024-11-27 11:25:40.121786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.122210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.122225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:11.330 [2024-11-27 11:25:40.122234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:30:11.330 [2024-11-27 11:25:40.122242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.141642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.141681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:11.330 [2024-11-27 11:25:40.141701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
19.383 ms 00:30:11.330 [2024-11-27 11:25:40.141709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.149598] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:11.330 [2024-11-27 11:25:40.152564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.152689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:11.330 [2024-11-27 11:25:40.152705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.793 ms 00:30:11.330 [2024-11-27 11:25:40.152720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.152774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.152785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:11.330 [2024-11-27 11:25:40.152794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:11.330 [2024-11-27 11:25:40.152801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.330 [2024-11-27 11:25:40.152882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.330 [2024-11-27 11:25:40.152907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:11.330 [2024-11-27 11:25:40.152917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:11.330 [2024-11-27 11:25:40.152932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.331 [2024-11-27 11:25:40.152955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.331 [2024-11-27 11:25:40.152964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:11.331 [2024-11-27 11:25:40.152976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:11.331 [2024-11-27 11:25:40.152983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.331 [2024-11-27 11:25:40.153016] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:11.331 [2024-11-27 11:25:40.153027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.331 [2024-11-27 11:25:40.153056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:11.331 [2024-11-27 11:25:40.153065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:11.331 [2024-11-27 11:25:40.153073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.331 [2024-11-27 11:25:40.157446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.331 [2024-11-27 11:25:40.157482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:11.331 [2024-11-27 11:25:40.157492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.355 ms 00:30:11.331 [2024-11-27 11:25:40.157500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.331 [2024-11-27 11:25:40.157577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.331 [2024-11-27 11:25:40.157586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:11.331 [2024-11-27 11:25:40.157599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:11.331 [2024-11-27 11:25:40.157607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.331 
[2024-11-27 11:25:40.158601] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.122 ms, result 0 00:30:12.718  [2024-11-27T11:25:42.173Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-27T11:25:43.557Z] Copying: 34/1024 [MB] (17 MBps) [2024-11-27T11:25:44.501Z] Copying: 49/1024 [MB] (14 MBps) [2024-11-27T11:25:45.441Z] Copying: 69/1024 [MB] (20 MBps) [2024-11-27T11:25:46.384Z] Copying: 93/1024 [MB] (24 MBps) [2024-11-27T11:25:47.328Z] Copying: 112/1024 [MB] (18 MBps) [2024-11-27T11:25:48.275Z] Copying: 133/1024 [MB] (20 MBps) [2024-11-27T11:25:49.220Z] Copying: 149/1024 [MB] (16 MBps) [2024-11-27T11:25:50.179Z] Copying: 170/1024 [MB] (20 MBps) [2024-11-27T11:25:51.554Z] Copying: 206/1024 [MB] (35 MBps) [2024-11-27T11:25:52.487Z] Copying: 250/1024 [MB] (44 MBps) [2024-11-27T11:25:53.433Z] Copying: 298/1024 [MB] (47 MBps) [2024-11-27T11:25:54.379Z] Copying: 331/1024 [MB] (33 MBps) [2024-11-27T11:25:55.322Z] Copying: 353/1024 [MB] (21 MBps) [2024-11-27T11:25:56.267Z] Copying: 371/1024 [MB] (18 MBps) [2024-11-27T11:25:57.213Z] Copying: 393/1024 [MB] (22 MBps) [2024-11-27T11:25:58.601Z] Copying: 416/1024 [MB] (22 MBps) [2024-11-27T11:25:59.174Z] Copying: 428/1024 [MB] (12 MBps) [2024-11-27T11:26:00.562Z] Copying: 439/1024 [MB] (10 MBps) [2024-11-27T11:26:01.509Z] Copying: 458/1024 [MB] (19 MBps) [2024-11-27T11:26:02.455Z] Copying: 479/1024 [MB] (20 MBps) [2024-11-27T11:26:03.404Z] Copying: 497/1024 [MB] (18 MBps) [2024-11-27T11:26:04.349Z] Copying: 516/1024 [MB] (18 MBps) [2024-11-27T11:26:05.358Z] Copying: 536/1024 [MB] (20 MBps) [2024-11-27T11:26:06.303Z] Copying: 557/1024 [MB] (20 MBps) [2024-11-27T11:26:07.247Z] Copying: 573/1024 [MB] (16 MBps) [2024-11-27T11:26:08.191Z] Copying: 594/1024 [MB] (21 MBps) [2024-11-27T11:26:09.578Z] Copying: 617/1024 [MB] (22 MBps) [2024-11-27T11:26:10.524Z] Copying: 639/1024 [MB] (21 MBps) [2024-11-27T11:26:11.468Z] Copying: 657/1024 [MB] (18 MBps) [2024-11-27T11:26:12.412Z] Copying: 669/1024 [MB] (11 MBps) [2024-11-27T11:26:13.358Z] Copying: 680/1024 [MB] (11 MBps) [2024-11-27T11:26:14.303Z] Copying: 690/1024 [MB] (10 MBps) [2024-11-27T11:26:15.250Z] Copying: 701/1024 [MB] (10 MBps) [2024-11-27T11:26:16.195Z] Copying: 712/1024 [MB] (10 MBps) [2024-11-27T11:26:17.585Z] Copying: 723/1024 [MB] (11 MBps) [2024-11-27T11:26:18.531Z] Copying: 734/1024 [MB] (11 MBps) [2024-11-27T11:26:19.476Z] Copying: 746/1024 [MB] (11 MBps) [2024-11-27T11:26:20.420Z] Copying: 757/1024 [MB] (11 MBps) [2024-11-27T11:26:21.366Z] Copying: 768/1024 [MB] (11 MBps) [2024-11-27T11:26:22.311Z] Copying: 780/1024 [MB] (11 MBps) [2024-11-27T11:26:23.256Z] Copying: 790/1024 [MB] (10 MBps) [2024-11-27T11:26:24.211Z] Copying: 801/1024 [MB] (10 MBps) [2024-11-27T11:26:25.595Z] Copying: 812/1024 [MB] (11 MBps) [2024-11-27T11:26:26.537Z] Copying: 823/1024 [MB] (10 MBps) [2024-11-27T11:26:27.477Z] Copying: 837/1024 [MB] (13 MBps) [2024-11-27T11:26:28.421Z] Copying: 854/1024 [MB] (17 MBps) [2024-11-27T11:26:29.366Z] Copying: 883/1024 [MB] (28 MBps) [2024-11-27T11:26:30.311Z] Copying: 905/1024 [MB] (22 MBps) [2024-11-27T11:26:31.254Z] Copying: 924/1024 [MB] (19 MBps) [2024-11-27T11:26:32.198Z] Copying: 943/1024 [MB] (18 MBps) [2024-11-27T11:26:33.588Z] Copying: 958/1024 [MB] (15 MBps) [2024-11-27T11:26:34.189Z] Copying: 976/1024 [MB] (17 MBps) [2024-11-27T11:26:35.579Z] Copying: 992/1024 [MB] (16 MBps) [2024-11-27T11:26:35.579Z] Copying: 1011/1024 [MB] (19 MBps) [2024-11-27T11:26:35.579Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-27 
11:26:35.570983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.696 [2024-11-27 11:26:35.571019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:06.696 [2024-11-27 11:26:35.571030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:06.696 [2024-11-27 11:26:35.571037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.696 [2024-11-27 11:26:35.571053] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:06.696 [2024-11-27 11:26:35.571464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.696 [2024-11-27 11:26:35.571478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:06.696 [2024-11-27 11:26:35.571486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:31:06.696 [2024-11-27 11:26:35.571492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.696 [2024-11-27 11:26:35.572771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.696 [2024-11-27 11:26:35.572793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:06.696 [2024-11-27 11:26:35.572807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:31:06.696 [2024-11-27 11:26:35.572813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.696 [2024-11-27 11:26:35.572835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.696 [2024-11-27 11:26:35.572844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:06.696 [2024-11-27 11:26:35.572851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:06.696 [2024-11-27 11:26:35.572857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.696 [2024-11-27 11:26:35.572916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.696 [2024-11-27 11:26:35.572924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:06.696 [2024-11-27 11:26:35.572930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:06.696 [2024-11-27 11:26:35.572940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.696 [2024-11-27 11:26:35.572951] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:06.696 [2024-11-27 11:26:35.572960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.572968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.572974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.572980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.572986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.572992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.572998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573003] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 
[2024-11-27 11:26:35.573164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:06.696 [2024-11-27 11:26:35.573237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:31:06.697 [2024-11-27 11:26:35.573306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:06.697 [2024-11-27 11:26:35.573553] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:06.697 [2024-11-27 11:26:35.573560] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc5eda24-0ce9-482a-92cf-9e0de2879e0b 00:31:06.697 [2024-11-27 11:26:35.573566] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:06.697 [2024-11-27 11:26:35.573575] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:06.697 [2024-11-27 11:26:35.573580] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:06.697 [2024-11-27 11:26:35.573586] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:06.697 [2024-11-27 11:26:35.573591] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:06.697 [2024-11-27 11:26:35.573599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:31:06.697 [2024-11-27 11:26:35.573607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:06.697 [2024-11-27 11:26:35.573613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:06.697 [2024-11-27 11:26:35.573618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:06.697 [2024-11-27 11:26:35.573623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.697 [2024-11-27 11:26:35.573629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:06.697 [2024-11-27 11:26:35.573635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:31:06.697 [2024-11-27 11:26:35.573641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.697 [2024-11-27 11:26:35.575128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.697 [2024-11-27 11:26:35.575212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:06.697 [2024-11-27 11:26:35.575305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms 00:31:06.697 [2024-11-27 11:26:35.575330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.697 [2024-11-27 11:26:35.575447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.697 [2024-11-27 11:26:35.575472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:06.697 [2024-11-27 11:26:35.575522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:06.697 [2024-11-27 11:26:35.575542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.957 [2024-11-27 11:26:35.579381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.957 [2024-11-27 11:26:35.579466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:06.957 [2024-11-27 11:26:35.579506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.957 [2024-11-27 11:26:35.579523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.957 [2024-11-27 11:26:35.579575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.957 [2024-11-27 11:26:35.579651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:06.957 [2024-11-27 11:26:35.579677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.957 [2024-11-27 11:26:35.579695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.957 [2024-11-27 11:26:35.579741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.957 [2024-11-27 11:26:35.579764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:06.957 [2024-11-27 11:26:35.579820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.957 [2024-11-27 11:26:35.579839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.957 [2024-11-27 11:26:35.579860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.957 [2024-11-27 11:26:35.579876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:06.957 [2024-11-27 11:26:35.579901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.957 [2024-11-27 11:26:35.579917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.957 [2024-11-27 11:26:35.587616] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.957 [2024-11-27 11:26:35.587721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:06.957 [2024-11-27 11:26:35.587760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.957 [2024-11-27 11:26:35.587776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.957 [2024-11-27 11:26:35.593946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.957 [2024-11-27 11:26:35.594061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:06.957 [2024-11-27 11:26:35.594105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.957 [2024-11-27 11:26:35.594127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.957 [2024-11-27 11:26:35.594154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.957 [2024-11-27 11:26:35.594199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:06.957 [2024-11-27 11:26:35.594217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.957 [2024-11-27 11:26:35.594231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.957 [2024-11-27 11:26:35.594274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.957 [2024-11-27 11:26:35.594282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:06.957 [2024-11-27 11:26:35.594288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.957 [2024-11-27 11:26:35.594294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.957 [2024-11-27 11:26:35.594339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.957 [2024-11-27 11:26:35.594346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:06.957 [2024-11-27 11:26:35.594352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.957 [2024-11-27 11:26:35.594358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.957 [2024-11-27 11:26:35.594378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.957 [2024-11-27 11:26:35.594385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:06.957 [2024-11-27 11:26:35.594391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.957 [2024-11-27 11:26:35.594396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.957 [2024-11-27 11:26:35.594423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.957 [2024-11-27 11:26:35.594432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:06.957 [2024-11-27 11:26:35.594440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.957 [2024-11-27 11:26:35.594446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.957 [2024-11-27 11:26:35.594477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.957 [2024-11-27 11:26:35.594487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:06.957 [2024-11-27 11:26:35.594493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.957 [2024-11-27 11:26:35.594499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:31:06.957 [2024-11-27 11:26:35.594588] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 23.582 ms, result 0 00:31:07.217 00:31:07.217 00:31:07.217 11:26:36 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:31:07.217 [2024-11-27 11:26:36.080421] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:07.217 [2024-11-27 11:26:36.080546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94243 ] 00:31:07.476 [2024-11-27 11:26:36.227300] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.476 [2024-11-27 11:26:36.257268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.476 [2024-11-27 11:26:36.339689] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:07.476 [2024-11-27 11:26:36.339740] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:07.737 [2024-11-27 11:26:36.481553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.737 [2024-11-27 11:26:36.481590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:07.737 [2024-11-27 11:26:36.481602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:07.737 [2024-11-27 11:26:36.481608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.737 [2024-11-27 11:26:36.481640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.737 [2024-11-27 11:26:36.481650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:07.737 [2024-11-27 11:26:36.481661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:07.737 [2024-11-27 11:26:36.481666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.737 [2024-11-27 11:26:36.481678] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:07.737 [2024-11-27 11:26:36.481858] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:07.737 [2024-11-27 11:26:36.481868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.737 [2024-11-27 11:26:36.481874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:07.737 [2024-11-27 11:26:36.481882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:31:07.737 [2024-11-27 11:26:36.481907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.737 [2024-11-27 11:26:36.482120] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:07.737 [2024-11-27 11:26:36.482139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.737 [2024-11-27 11:26:36.482145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:07.737 [2024-11-27 11:26:36.482152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:07.737 [2024-11-27 11:26:36.482157] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.737 [2024-11-27 11:26:36.482192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.737 [2024-11-27 11:26:36.482200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:07.737 [2024-11-27 11:26:36.482206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:07.737 [2024-11-27 11:26:36.482211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.737 [2024-11-27 11:26:36.482384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.737 [2024-11-27 11:26:36.482394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:07.737 [2024-11-27 11:26:36.482400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:31:07.737 [2024-11-27 11:26:36.482406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.737 [2024-11-27 11:26:36.482460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.737 [2024-11-27 11:26:36.482468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:07.737 [2024-11-27 11:26:36.482474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:07.737 [2024-11-27 11:26:36.482482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.737 [2024-11-27 11:26:36.482497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.737 [2024-11-27 11:26:36.482503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:07.737 [2024-11-27 11:26:36.482509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:07.737 [2024-11-27 11:26:36.482514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.738 [2024-11-27 11:26:36.482527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:07.738 [2024-11-27 11:26:36.483785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.738 [2024-11-27 11:26:36.483804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:07.738 [2024-11-27 11:26:36.483816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:31:07.738 [2024-11-27 11:26:36.483821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.738 [2024-11-27 11:26:36.483848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.738 [2024-11-27 11:26:36.483855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:07.738 [2024-11-27 11:26:36.483861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:07.738 [2024-11-27 11:26:36.483866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.738 [2024-11-27 11:26:36.483882] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:07.738 [2024-11-27 11:26:36.483912] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:07.738 [2024-11-27 11:26:36.483939] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:07.738 [2024-11-27 11:26:36.483951] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:07.738 [2024-11-27 11:26:36.484029] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:07.738 [2024-11-27 11:26:36.484037] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:07.738 [2024-11-27 11:26:36.484048] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:07.738 [2024-11-27 11:26:36.484056] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:07.738 [2024-11-27 11:26:36.484062] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:07.738 [2024-11-27 11:26:36.484068] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:07.738 [2024-11-27 11:26:36.484077] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:07.738 [2024-11-27 11:26:36.484083] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:07.738 [2024-11-27 11:26:36.484088] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:07.738 [2024-11-27 11:26:36.484093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.738 [2024-11-27 11:26:36.484098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:07.738 [2024-11-27 11:26:36.484104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:31:07.738 [2024-11-27 11:26:36.484112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.738 [2024-11-27 11:26:36.484175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.738 [2024-11-27 11:26:36.484181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:07.738 [2024-11-27 11:26:36.484189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:07.738 [2024-11-27 11:26:36.484196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.738 [2024-11-27 11:26:36.484267] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:07.738 [2024-11-27 11:26:36.484274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:07.738 [2024-11-27 11:26:36.484281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:07.738 [2024-11-27 11:26:36.484287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.738 [2024-11-27 11:26:36.484297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:07.738 [2024-11-27 11:26:36.484309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:07.738 [2024-11-27 11:26:36.484315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:07.738 [2024-11-27 11:26:36.484321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:07.738 [2024-11-27 11:26:36.484327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:07.738 [2024-11-27 11:26:36.484332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:07.738 [2024-11-27 11:26:36.484337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:07.738 [2024-11-27 11:26:36.484342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:07.738 [2024-11-27 11:26:36.484347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:07.738 [2024-11-27 
11:26:36.484351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:07.738 [2024-11-27 11:26:36.484357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:07.738 [2024-11-27 11:26:36.484361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.738 [2024-11-27 11:26:36.484366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:07.738 [2024-11-27 11:26:36.484371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:07.738 [2024-11-27 11:26:36.484376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.738 [2024-11-27 11:26:36.484381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:07.738 [2024-11-27 11:26:36.484387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:07.738 [2024-11-27 11:26:36.484393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:07.738 [2024-11-27 11:26:36.484398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:07.738 [2024-11-27 11:26:36.484403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:07.738 [2024-11-27 11:26:36.484407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:07.738 [2024-11-27 11:26:36.484412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:07.738 [2024-11-27 11:26:36.484417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:07.738 [2024-11-27 11:26:36.484422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:07.738 [2024-11-27 11:26:36.484426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:07.738 [2024-11-27 11:26:36.484431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:07.738 [2024-11-27 11:26:36.484436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:07.738 [2024-11-27 11:26:36.484443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:07.738 [2024-11-27 11:26:36.484448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:07.738 [2024-11-27 11:26:36.484453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:07.738 [2024-11-27 11:26:36.484459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:07.738 [2024-11-27 11:26:36.484465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:07.738 [2024-11-27 11:26:36.484472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:07.738 [2024-11-27 11:26:36.484480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:07.738 [2024-11-27 11:26:36.484486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:07.738 [2024-11-27 11:26:36.484491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.738 [2024-11-27 11:26:36.484497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:07.738 [2024-11-27 11:26:36.484503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:07.738 [2024-11-27 11:26:36.484509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.738 [2024-11-27 11:26:36.484515] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:07.738 [2024-11-27 11:26:36.484521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:31:07.738 [2024-11-27 11:26:36.484527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:07.738 [2024-11-27 11:26:36.484533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:07.738 [2024-11-27 11:26:36.484540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:07.738 [2024-11-27 11:26:36.484546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:07.738 [2024-11-27 11:26:36.484551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:07.738 [2024-11-27 11:26:36.484557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:07.738 [2024-11-27 11:26:36.484563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:07.738 [2024-11-27 11:26:36.484569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:07.738 [2024-11-27 11:26:36.484577] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:07.738 [2024-11-27 11:26:36.484586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:07.738 [2024-11-27 11:26:36.484593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:07.738 [2024-11-27 11:26:36.484599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:07.738 [2024-11-27 11:26:36.484606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:07.738 [2024-11-27 11:26:36.484612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:07.738 [2024-11-27 11:26:36.484618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:07.738 [2024-11-27 11:26:36.484624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:07.738 [2024-11-27 11:26:36.484630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:07.738 [2024-11-27 11:26:36.484637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:07.738 [2024-11-27 11:26:36.484643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:07.738 [2024-11-27 11:26:36.484649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:07.738 [2024-11-27 11:26:36.484655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:07.738 [2024-11-27 11:26:36.484661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:07.738 [2024-11-27 11:26:36.484667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:07.738 [2024-11-27 11:26:36.484674] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:07.739 [2024-11-27 11:26:36.484681] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:07.739 [2024-11-27 11:26:36.484689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:07.739 [2024-11-27 11:26:36.484698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:07.739 [2024-11-27 11:26:36.484705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:07.739 [2024-11-27 11:26:36.484711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:07.739 [2024-11-27 11:26:36.484717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:07.739 [2024-11-27 11:26:36.484723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.484730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:07.739 [2024-11-27 11:26:36.484736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:31:07.739 [2024-11-27 11:26:36.484742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.500140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.500190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:07.739 [2024-11-27 11:26:36.500211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.366 ms 00:31:07.739 [2024-11-27 11:26:36.500222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.500352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.500365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:07.739 [2024-11-27 11:26:36.500382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:31:07.739 [2024-11-27 11:26:36.500392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.509798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.509943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:07.739 [2024-11-27 11:26:36.509961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.332 ms 00:31:07.739 [2024-11-27 11:26:36.509967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.509990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.509997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:07.739 [2024-11-27 11:26:36.510003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:07.739 [2024-11-27 11:26:36.510009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.510067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 
11:26:36.510074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:07.739 [2024-11-27 11:26:36.510081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:07.739 [2024-11-27 11:26:36.510088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.510175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.510181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:07.739 [2024-11-27 11:26:36.510192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:31:07.739 [2024-11-27 11:26:36.510198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.514176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.514205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:07.739 [2024-11-27 11:26:36.514212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.963 ms 00:31:07.739 [2024-11-27 11:26:36.514220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.514308] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:07.739 [2024-11-27 11:26:36.514318] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:07.739 [2024-11-27 11:26:36.514325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.514336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:07.739 [2024-11-27 11:26:36.514342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:07.739 [2024-11-27 11:26:36.514348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.523472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.523498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:07.739 [2024-11-27 11:26:36.523505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.112 ms 00:31:07.739 [2024-11-27 11:26:36.523511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.523596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.523605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:07.739 [2024-11-27 11:26:36.523611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:07.739 [2024-11-27 11:26:36.523620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.523643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.523649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:07.739 [2024-11-27 11:26:36.523656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:07.739 [2024-11-27 11:26:36.523663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.523882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.523917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize P2L checkpointing 00:31:07.739 [2024-11-27 11:26:36.523924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:31:07.739 [2024-11-27 11:26:36.523929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.523940] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:07.739 [2024-11-27 11:26:36.523950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.523957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:07.739 [2024-11-27 11:26:36.523963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:07.739 [2024-11-27 11:26:36.523970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.530298] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:07.739 [2024-11-27 11:26:36.530393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.530400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:07.739 [2024-11-27 11:26:36.530407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.411 ms 00:31:07.739 [2024-11-27 11:26:36.530412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.532197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.532217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:07.739 [2024-11-27 11:26:36.532225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.773 ms 00:31:07.739 [2024-11-27 11:26:36.532231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.532292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.532300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:07.739 [2024-11-27 11:26:36.532306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:07.739 [2024-11-27 11:26:36.532311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.532328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.532335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:07.739 [2024-11-27 11:26:36.532341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:07.739 [2024-11-27 11:26:36.532347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.532367] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:07.739 [2024-11-27 11:26:36.532374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.532379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:07.739 [2024-11-27 11:26:36.532384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:07.739 [2024-11-27 11:26:36.532396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.535599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.535628] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:07.739 [2024-11-27 11:26:36.535639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.191 ms 00:31:07.739 [2024-11-27 11:26:36.535644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.535696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.739 [2024-11-27 11:26:36.535703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:07.739 [2024-11-27 11:26:36.535709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:07.739 [2024-11-27 11:26:36.535714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.739 [2024-11-27 11:26:36.536703] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 54.844 ms, result 0 00:31:09.126  [2024-11-27T11:26:38.956Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-27T11:26:39.901Z] Copying: 45/1024 [MB] (21 MBps) [2024-11-27T11:26:40.846Z] Copying: 63/1024 [MB] (17 MBps) [2024-11-27T11:26:41.791Z] Copying: 81/1024 [MB] (17 MBps) [2024-11-27T11:26:42.737Z] Copying: 96/1024 [MB] (15 MBps) [2024-11-27T11:26:43.682Z] Copying: 110/1024 [MB] (13 MBps) [2024-11-27T11:26:45.071Z] Copying: 120/1024 [MB] (10 MBps) [2024-11-27T11:26:46.014Z] Copying: 131/1024 [MB] (10 MBps) [2024-11-27T11:26:46.957Z] Copying: 142/1024 [MB] (10 MBps) [2024-11-27T11:26:47.899Z] Copying: 153/1024 [MB] (10 MBps) [2024-11-27T11:26:48.846Z] Copying: 163/1024 [MB] (10 MBps) [2024-11-27T11:26:49.792Z] Copying: 174/1024 [MB] (10 MBps) [2024-11-27T11:26:50.738Z] Copying: 185/1024 [MB] (10 MBps) [2024-11-27T11:26:51.685Z] Copying: 196/1024 [MB] (11 MBps) [2024-11-27T11:26:53.076Z] Copying: 207/1024 [MB] (11 MBps) [2024-11-27T11:26:54.022Z] Copying: 218/1024 [MB] (10 MBps) [2024-11-27T11:26:54.967Z] Copying: 228/1024 [MB] (10 MBps) [2024-11-27T11:26:55.914Z] Copying: 239/1024 [MB] (10 MBps) [2024-11-27T11:26:56.860Z] Copying: 250/1024 [MB] (10 MBps) [2024-11-27T11:26:57.807Z] Copying: 260/1024 [MB] (10 MBps) [2024-11-27T11:26:58.752Z] Copying: 271/1024 [MB] (10 MBps) [2024-11-27T11:26:59.697Z] Copying: 281/1024 [MB] (10 MBps) [2024-11-27T11:27:01.084Z] Copying: 293/1024 [MB] (11 MBps) [2024-11-27T11:27:02.024Z] Copying: 304/1024 [MB] (10 MBps) [2024-11-27T11:27:03.117Z] Copying: 314/1024 [MB] (10 MBps) [2024-11-27T11:27:03.688Z] Copying: 325/1024 [MB] (10 MBps) [2024-11-27T11:27:05.078Z] Copying: 335/1024 [MB] (10 MBps) [2024-11-27T11:27:06.025Z] Copying: 346/1024 [MB] (10 MBps) [2024-11-27T11:27:06.970Z] Copying: 356/1024 [MB] (10 MBps) [2024-11-27T11:27:07.914Z] Copying: 366/1024 [MB] (10 MBps) [2024-11-27T11:27:08.858Z] Copying: 377/1024 [MB] (11 MBps) [2024-11-27T11:27:09.799Z] Copying: 388/1024 [MB] (10 MBps) [2024-11-27T11:27:10.742Z] Copying: 398/1024 [MB] (10 MBps) [2024-11-27T11:27:11.686Z] Copying: 408/1024 [MB] (10 MBps) [2024-11-27T11:27:13.075Z] Copying: 419/1024 [MB] (10 MBps) [2024-11-27T11:27:14.014Z] Copying: 438/1024 [MB] (19 MBps) [2024-11-27T11:27:14.957Z] Copying: 459/1024 [MB] (21 MBps) [2024-11-27T11:27:15.899Z] Copying: 477/1024 [MB] (17 MBps) [2024-11-27T11:27:16.843Z] Copying: 498/1024 [MB] (21 MBps) [2024-11-27T11:27:17.781Z] Copying: 517/1024 [MB] (18 MBps) [2024-11-27T11:27:18.722Z] Copying: 542/1024 [MB] (24 MBps) [2024-11-27T11:27:20.107Z] Copying: 562/1024 [MB] (20 MBps) [2024-11-27T11:27:20.682Z] Copying: 583/1024 [MB] (20 MBps) [2024-11-27T11:27:22.067Z] Copying: 604/1024 [MB] (20 MBps) 
[2024-11-27T11:27:23.010Z] Copying: 624/1024 [MB] (20 MBps) [2024-11-27T11:27:23.954Z] Copying: 635/1024 [MB] (10 MBps) [2024-11-27T11:27:24.895Z] Copying: 653/1024 [MB] (17 MBps) [2024-11-27T11:27:25.838Z] Copying: 667/1024 [MB] (14 MBps) [2024-11-27T11:27:26.775Z] Copying: 683/1024 [MB] (15 MBps) [2024-11-27T11:27:27.716Z] Copying: 711/1024 [MB] (28 MBps) [2024-11-27T11:27:29.099Z] Copying: 727/1024 [MB] (16 MBps) [2024-11-27T11:27:29.686Z] Copying: 744/1024 [MB] (16 MBps) [2024-11-27T11:27:31.133Z] Copying: 760/1024 [MB] (15 MBps) [2024-11-27T11:27:31.712Z] Copying: 779/1024 [MB] (18 MBps) [2024-11-27T11:27:33.098Z] Copying: 799/1024 [MB] (20 MBps) [2024-11-27T11:27:34.037Z] Copying: 821/1024 [MB] (21 MBps) [2024-11-27T11:27:34.980Z] Copying: 834/1024 [MB] (12 MBps) [2024-11-27T11:27:35.918Z] Copying: 844/1024 [MB] (10 MBps) [2024-11-27T11:27:36.859Z] Copying: 863/1024 [MB] (19 MBps) [2024-11-27T11:27:37.802Z] Copying: 879/1024 [MB] (16 MBps) [2024-11-27T11:27:38.746Z] Copying: 896/1024 [MB] (16 MBps) [2024-11-27T11:27:39.690Z] Copying: 908/1024 [MB] (12 MBps) [2024-11-27T11:27:41.071Z] Copying: 928/1024 [MB] (20 MBps) [2024-11-27T11:27:42.011Z] Copying: 942/1024 [MB] (13 MBps) [2024-11-27T11:27:42.953Z] Copying: 962/1024 [MB] (19 MBps) [2024-11-27T11:27:43.894Z] Copying: 983/1024 [MB] (20 MBps) [2024-11-27T11:27:44.839Z] Copying: 996/1024 [MB] (13 MBps) [2024-11-27T11:27:45.411Z] Copying: 1017/1024 [MB] (21 MBps) [2024-11-27T11:27:45.411Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-27 11:27:45.398667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.528 [2024-11-27 11:27:45.398767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:16.528 [2024-11-27 11:27:45.398797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:16.528 [2024-11-27 11:27:45.398809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.528 [2024-11-27 11:27:45.398838] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:16.528 [2024-11-27 11:27:45.399796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.528 [2024-11-27 11:27:45.399835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:16.528 [2024-11-27 11:27:45.399851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:32:16.528 [2024-11-27 11:27:45.399862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.528 [2024-11-27 11:27:45.400185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.528 [2024-11-27 11:27:45.400199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:16.528 [2024-11-27 11:27:45.400212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:32:16.528 [2024-11-27 11:27:45.400222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.528 [2024-11-27 11:27:45.400258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.528 [2024-11-27 11:27:45.400277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:16.528 [2024-11-27 11:27:45.400294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:16.528 [2024-11-27 11:27:45.400305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.528 [2024-11-27 11:27:45.400377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:32:16.528 [2024-11-27 11:27:45.400391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:16.528 [2024-11-27 11:27:45.400403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:16.528 [2024-11-27 11:27:45.400535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.528 [2024-11-27 11:27:45.400554] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:16.528 [2024-11-27 11:27:45.400570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:16.528 [2024-11-27 11:27:45.400584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:16.528 [2024-11-27 11:27:45.400594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:16.528 [2024-11-27 11:27:45.400604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:16.528 [2024-11-27 11:27:45.400614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:16.528 [2024-11-27 11:27:45.400625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:16.528 [2024-11-27 11:27:45.400636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400799] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.400989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401100] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 
11:27:45.401356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:16.529 [2024-11-27 11:27:45.401526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:16.530 [2024-11-27 11:27:45.401536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:16.530 [2024-11-27 11:27:45.401557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:16.530 [2024-11-27 11:27:45.401568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:16.530 [2024-11-27 11:27:45.401578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:16.530 [2024-11-27 11:27:45.401588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:16.530 [2024-11-27 11:27:45.401598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:16.530 [2024-11-27 11:27:45.401608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:32:16.530 [2024-11-27 11:27:45.401618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:16.530 [2024-11-27 11:27:45.401627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:16.530 [2024-11-27 11:27:45.401637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:16.530 [2024-11-27 11:27:45.401646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:16.530 [2024-11-27 11:27:45.401666] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:16.530 [2024-11-27 11:27:45.401677] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc5eda24-0ce9-482a-92cf-9e0de2879e0b 00:32:16.530 [2024-11-27 11:27:45.401692] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:16.530 [2024-11-27 11:27:45.401707] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:16.530 [2024-11-27 11:27:45.401716] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:16.530 [2024-11-27 11:27:45.401729] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:16.530 [2024-11-27 11:27:45.401738] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:16.530 [2024-11-27 11:27:45.401756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:16.530 [2024-11-27 11:27:45.401766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:16.530 [2024-11-27 11:27:45.401774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:16.530 [2024-11-27 11:27:45.401783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:16.530 [2024-11-27 11:27:45.401793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.530 [2024-11-27 11:27:45.401802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:16.530 [2024-11-27 11:27:45.401814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:32:16.530 [2024-11-27 11:27:45.401824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.530 [2024-11-27 11:27:45.404417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.530 [2024-11-27 11:27:45.404461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:16.530 [2024-11-27 11:27:45.404476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.572 ms 00:32:16.530 [2024-11-27 11:27:45.404487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.530 [2024-11-27 11:27:45.405845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:16.530 [2024-11-27 11:27:45.405980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:16.530 [2024-11-27 11:27:45.406086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:32:16.530 [2024-11-27 11:27:45.406118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.790 [2024-11-27 11:27:45.413558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.790 [2024-11-27 11:27:45.413743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:16.790 [2024-11-27 11:27:45.413801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.790 
[2024-11-27 11:27:45.413824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.790 [2024-11-27 11:27:45.413926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.790 [2024-11-27 11:27:45.413953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:16.790 [2024-11-27 11:27:45.413974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.790 [2024-11-27 11:27:45.413994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.790 [2024-11-27 11:27:45.414087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.790 [2024-11-27 11:27:45.414114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:16.790 [2024-11-27 11:27:45.414137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.790 [2024-11-27 11:27:45.414208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.790 [2024-11-27 11:27:45.414246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.790 [2024-11-27 11:27:45.414269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:16.790 [2024-11-27 11:27:45.414318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.790 [2024-11-27 11:27:45.414343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.790 [2024-11-27 11:27:45.428614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.790 [2024-11-27 11:27:45.428803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:16.790 [2024-11-27 11:27:45.428822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.790 [2024-11-27 11:27:45.428831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.790 [2024-11-27 11:27:45.440201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.790 [2024-11-27 11:27:45.440367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:16.790 [2024-11-27 11:27:45.440424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.790 [2024-11-27 11:27:45.440447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.790 [2024-11-27 11:27:45.440523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.790 [2024-11-27 11:27:45.440545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:16.790 [2024-11-27 11:27:45.440566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.790 [2024-11-27 11:27:45.440586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.790 [2024-11-27 11:27:45.440633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.790 [2024-11-27 11:27:45.440655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:16.790 [2024-11-27 11:27:45.440675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.790 [2024-11-27 11:27:45.440733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.790 [2024-11-27 11:27:45.440812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.790 [2024-11-27 11:27:45.440841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:16.790 [2024-11-27 11:27:45.440861] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.790 [2024-11-27 11:27:45.440881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.790 [2024-11-27 11:27:45.440940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.790 [2024-11-27 11:27:45.440973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:16.790 [2024-11-27 11:27:45.441174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.790 [2024-11-27 11:27:45.441216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.790 [2024-11-27 11:27:45.441277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.790 [2024-11-27 11:27:45.441315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:16.790 [2024-11-27 11:27:45.441489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.790 [2024-11-27 11:27:45.441565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.790 [2024-11-27 11:27:45.441637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.790 [2024-11-27 11:27:45.441733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:16.790 [2024-11-27 11:27:45.441788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.790 [2024-11-27 11:27:45.441813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.790 [2024-11-27 11:27:45.441982] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 43.285 ms, result 0 00:32:16.790 00:32:16.790 00:32:16.790 11:27:45 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:19.332 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:19.332 11:27:47 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:19.332 [2024-11-27 11:27:47.984044] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:19.332 [2024-11-27 11:27:47.984230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94962 ] 00:32:19.332 [2024-11-27 11:27:48.141425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.332 [2024-11-27 11:27:48.197557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.592 [2024-11-27 11:27:48.312855] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:19.592 [2024-11-27 11:27:48.313206] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:19.854 [2024-11-27 11:27:48.474619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.854 [2024-11-27 11:27:48.474680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:19.854 [2024-11-27 11:27:48.474703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:19.854 [2024-11-27 11:27:48.474712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.854 [2024-11-27 11:27:48.474773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.854 [2024-11-27 11:27:48.474788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:19.854 [2024-11-27 11:27:48.474798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:32:19.854 [2024-11-27 11:27:48.474810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.854 [2024-11-27 11:27:48.474840] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:19.854 [2024-11-27 11:27:48.475152] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:19.854 [2024-11-27 11:27:48.475174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.854 [2024-11-27 11:27:48.475184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:19.854 [2024-11-27 11:27:48.475199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:32:19.854 [2024-11-27 11:27:48.475210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.854 [2024-11-27 11:27:48.475498] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:19.854 [2024-11-27 11:27:48.475526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.854 [2024-11-27 11:27:48.475536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:19.854 [2024-11-27 11:27:48.475547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:19.854 [2024-11-27 11:27:48.475555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.854 [2024-11-27 11:27:48.475619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.854 [2024-11-27 11:27:48.475631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:19.854 [2024-11-27 11:27:48.475640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:19.854 [2024-11-27 11:27:48.475649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.854 [2024-11-27 11:27:48.475965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:19.854 [2024-11-27 11:27:48.475981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:19.854 [2024-11-27 11:27:48.475990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:32:19.854 [2024-11-27 11:27:48.475999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.854 [2024-11-27 11:27:48.476085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.854 [2024-11-27 11:27:48.476103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:19.854 [2024-11-27 11:27:48.476112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:32:19.854 [2024-11-27 11:27:48.476120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.854 [2024-11-27 11:27:48.476146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.854 [2024-11-27 11:27:48.476162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:19.854 [2024-11-27 11:27:48.476174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:19.854 [2024-11-27 11:27:48.476182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.854 [2024-11-27 11:27:48.476210] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:19.854 [2024-11-27 11:27:48.478434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.854 [2024-11-27 11:27:48.478482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:19.854 [2024-11-27 11:27:48.478497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.232 ms 00:32:19.854 [2024-11-27 11:27:48.478511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.854 [2024-11-27 11:27:48.478550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.854 [2024-11-27 11:27:48.478561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:19.854 [2024-11-27 11:27:48.478570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:19.854 [2024-11-27 11:27:48.478580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.854 [2024-11-27 11:27:48.478634] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:19.854 [2024-11-27 11:27:48.478658] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:19.854 [2024-11-27 11:27:48.478704] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:19.854 [2024-11-27 11:27:48.478729] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:19.854 [2024-11-27 11:27:48.478835] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:19.854 [2024-11-27 11:27:48.478848] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:19.854 [2024-11-27 11:27:48.478860] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:19.854 [2024-11-27 11:27:48.478872] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:19.854 [2024-11-27 11:27:48.478883] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:19.854 [2024-11-27 11:27:48.478924] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:19.854 [2024-11-27 11:27:48.478935] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:19.854 [2024-11-27 11:27:48.478945] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:19.854 [2024-11-27 11:27:48.478953] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:19.854 [2024-11-27 11:27:48.478962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.854 [2024-11-27 11:27:48.478974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:19.854 [2024-11-27 11:27:48.478986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:32:19.854 [2024-11-27 11:27:48.478993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.854 [2024-11-27 11:27:48.479084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.854 [2024-11-27 11:27:48.479094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:19.854 [2024-11-27 11:27:48.479106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:19.854 [2024-11-27 11:27:48.479116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.854 [2024-11-27 11:27:48.479230] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:19.854 [2024-11-27 11:27:48.479243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:19.854 [2024-11-27 11:27:48.479252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:19.854 [2024-11-27 11:27:48.479263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.854 [2024-11-27 11:27:48.479273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:19.854 [2024-11-27 11:27:48.479288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:19.854 [2024-11-27 11:27:48.479295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:19.854 [2024-11-27 11:27:48.479304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:19.854 [2024-11-27 11:27:48.479311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:19.854 [2024-11-27 11:27:48.479319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:19.854 [2024-11-27 11:27:48.479326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:19.854 [2024-11-27 11:27:48.479336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:19.855 [2024-11-27 11:27:48.479344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:19.855 [2024-11-27 11:27:48.479351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:19.855 [2024-11-27 11:27:48.479359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:19.855 [2024-11-27 11:27:48.479366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.855 [2024-11-27 11:27:48.479372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:19.855 [2024-11-27 11:27:48.479379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:19.855 [2024-11-27 11:27:48.479385] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.855 [2024-11-27 11:27:48.479395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:19.855 [2024-11-27 11:27:48.479401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:19.855 [2024-11-27 11:27:48.479408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:19.855 [2024-11-27 11:27:48.479415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:19.855 [2024-11-27 11:27:48.479423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:19.855 [2024-11-27 11:27:48.479429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:19.855 [2024-11-27 11:27:48.479436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:19.855 [2024-11-27 11:27:48.479443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:19.855 [2024-11-27 11:27:48.479450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:19.855 [2024-11-27 11:27:48.479457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:19.855 [2024-11-27 11:27:48.479463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:19.855 [2024-11-27 11:27:48.479470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:19.855 [2024-11-27 11:27:48.479477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:19.855 [2024-11-27 11:27:48.479484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:19.855 [2024-11-27 11:27:48.479491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:19.855 [2024-11-27 11:27:48.479498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:19.855 [2024-11-27 11:27:48.479511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:19.855 [2024-11-27 11:27:48.479518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:19.855 [2024-11-27 11:27:48.479524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:19.855 [2024-11-27 11:27:48.479531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:19.855 [2024-11-27 11:27:48.479538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.855 [2024-11-27 11:27:48.479544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:19.855 [2024-11-27 11:27:48.479550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:19.855 [2024-11-27 11:27:48.479557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.855 [2024-11-27 11:27:48.479565] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:19.855 [2024-11-27 11:27:48.479575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:19.855 [2024-11-27 11:27:48.479584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:19.855 [2024-11-27 11:27:48.479592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.855 [2024-11-27 11:27:48.479599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:19.855 [2024-11-27 11:27:48.479606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:19.855 [2024-11-27 11:27:48.479613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:19.855 
[2024-11-27 11:27:48.479620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:19.855 [2024-11-27 11:27:48.479629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:19.855 [2024-11-27 11:27:48.479639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:19.855 [2024-11-27 11:27:48.479647] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:19.855 [2024-11-27 11:27:48.479658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:19.855 [2024-11-27 11:27:48.479668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:19.855 [2024-11-27 11:27:48.479675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:19.855 [2024-11-27 11:27:48.479683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:19.855 [2024-11-27 11:27:48.479690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:19.855 [2024-11-27 11:27:48.479698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:19.855 [2024-11-27 11:27:48.479706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:19.855 [2024-11-27 11:27:48.479714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:19.855 [2024-11-27 11:27:48.479721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:19.855 [2024-11-27 11:27:48.479729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:19.855 [2024-11-27 11:27:48.479736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:19.855 [2024-11-27 11:27:48.479743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:19.855 [2024-11-27 11:27:48.479750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:19.855 [2024-11-27 11:27:48.479760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:19.855 [2024-11-27 11:27:48.479768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:19.855 [2024-11-27 11:27:48.479779] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:19.855 [2024-11-27 11:27:48.479789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:19.855 [2024-11-27 11:27:48.479798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:19.855 [2024-11-27 11:27:48.479806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:19.855 [2024-11-27 11:27:48.479813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:19.855 [2024-11-27 11:27:48.479820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:19.855 [2024-11-27 11:27:48.479828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.855 [2024-11-27 11:27:48.479837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:19.855 [2024-11-27 11:27:48.479844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:32:19.855 [2024-11-27 11:27:48.479853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.855 [2024-11-27 11:27:48.500233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.855 [2024-11-27 11:27:48.500447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:19.855 [2024-11-27 11:27:48.500541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.017 ms 00:32:19.855 [2024-11-27 11:27:48.500567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.855 [2024-11-27 11:27:48.500697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.855 [2024-11-27 11:27:48.500732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:19.855 [2024-11-27 11:27:48.500755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:32:19.855 [2024-11-27 11:27:48.500777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.512759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.512945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:19.856 [2024-11-27 11:27:48.513019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.900 ms 00:32:19.856 [2024-11-27 11:27:48.513043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.513192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.513224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:19.856 [2024-11-27 11:27:48.513293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:19.856 [2024-11-27 11:27:48.513316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.513441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.513482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:19.856 [2024-11-27 11:27:48.513504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:19.856 [2024-11-27 11:27:48.513580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.513731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.513757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:19.856 [2024-11-27 11:27:48.513814] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:32:19.856 [2024-11-27 11:27:48.513905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.521126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.521272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:19.856 [2024-11-27 11:27:48.521325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.177 ms 00:32:19.856 [2024-11-27 11:27:48.521356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.521491] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:19.856 [2024-11-27 11:27:48.521583] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:19.856 [2024-11-27 11:27:48.521619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.521647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:19.856 [2024-11-27 11:27:48.521671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:32:19.856 [2024-11-27 11:27:48.521716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.534050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.534108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:19.856 [2024-11-27 11:27:48.534119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.305 ms 00:32:19.856 [2024-11-27 11:27:48.534127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.534265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.534274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:19.856 [2024-11-27 11:27:48.534283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:32:19.856 [2024-11-27 11:27:48.534293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.534350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.534361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:19.856 [2024-11-27 11:27:48.534369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:19.856 [2024-11-27 11:27:48.534386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.534701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.534714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:19.856 [2024-11-27 11:27:48.534723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:32:19.856 [2024-11-27 11:27:48.534730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.534748] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:19.856 [2024-11-27 11:27:48.534759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.534775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:19.856 [2024-11-27 11:27:48.534783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:19.856 [2024-11-27 11:27:48.534794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.544317] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:19.856 [2024-11-27 11:27:48.544474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.544486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:19.856 [2024-11-27 11:27:48.544498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.662 ms 00:32:19.856 [2024-11-27 11:27:48.544506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.547033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.547074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:19.856 [2024-11-27 11:27:48.547089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.501 ms 00:32:19.856 [2024-11-27 11:27:48.547098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.547201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.547212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:19.856 [2024-11-27 11:27:48.547222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:19.856 [2024-11-27 11:27:48.547231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.547255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.547269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:19.856 [2024-11-27 11:27:48.547277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:19.856 [2024-11-27 11:27:48.547285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.547317] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:19.856 [2024-11-27 11:27:48.547327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.547340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:19.856 [2024-11-27 11:27:48.547349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:19.856 [2024-11-27 11:27:48.547357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.554164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.554362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:19.856 [2024-11-27 11:27:48.554394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.785 ms 00:32:19.856 [2024-11-27 11:27:48.554404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.554492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.856 [2024-11-27 11:27:48.554510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:19.856 [2024-11-27 11:27:48.554520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.039 ms 00:32:19.856 [2024-11-27 11:27:48.554529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.856 [2024-11-27 11:27:48.555796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 80.719 ms, result 0 00:32:20.802  [2024-11-27T11:27:50.628Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-27T11:27:52.001Z] Copying: 21/1024 [MB] (11 MBps) [2024-11-27T11:27:52.568Z] Copying: 49/1024 [MB] (27 MBps) [2024-11-27T11:27:53.955Z] Copying: 100/1024 [MB] (51 MBps) [2024-11-27T11:27:54.899Z] Copying: 114/1024 [MB] (14 MBps) [2024-11-27T11:27:55.845Z] Copying: 134/1024 [MB] (19 MBps) [2024-11-27T11:27:56.791Z] Copying: 148/1024 [MB] (13 MBps) [2024-11-27T11:27:57.734Z] Copying: 163/1024 [MB] (14 MBps) [2024-11-27T11:27:58.680Z] Copying: 175/1024 [MB] (11 MBps) [2024-11-27T11:27:59.623Z] Copying: 188/1024 [MB] (13 MBps) [2024-11-27T11:28:00.635Z] Copying: 200/1024 [MB] (12 MBps) [2024-11-27T11:28:01.581Z] Copying: 211/1024 [MB] (10 MBps) [2024-11-27T11:28:02.968Z] Copying: 221/1024 [MB] (10 MBps) [2024-11-27T11:28:03.912Z] Copying: 235/1024 [MB] (13 MBps) [2024-11-27T11:28:04.856Z] Copying: 245/1024 [MB] (10 MBps) [2024-11-27T11:28:05.802Z] Copying: 256/1024 [MB] (10 MBps) [2024-11-27T11:28:06.735Z] Copying: 266/1024 [MB] (10 MBps) [2024-11-27T11:28:07.671Z] Copying: 293/1024 [MB] (26 MBps) [2024-11-27T11:28:08.613Z] Copying: 332/1024 [MB] (38 MBps) [2024-11-27T11:28:09.997Z] Copying: 344/1024 [MB] (12 MBps) [2024-11-27T11:28:10.571Z] Copying: 356/1024 [MB] (11 MBps) [2024-11-27T11:28:11.957Z] Copying: 367/1024 [MB] (11 MBps) [2024-11-27T11:28:12.900Z] Copying: 385/1024 [MB] (17 MBps) [2024-11-27T11:28:13.843Z] Copying: 404/1024 [MB] (19 MBps) [2024-11-27T11:28:14.788Z] Copying: 415/1024 [MB] (11 MBps) [2024-11-27T11:28:15.732Z] Copying: 429/1024 [MB] (13 MBps) [2024-11-27T11:28:16.677Z] Copying: 440/1024 [MB] (11 MBps) [2024-11-27T11:28:17.619Z] Copying: 454/1024 [MB] (13 MBps) [2024-11-27T11:28:19.008Z] Copying: 465/1024 [MB] (11 MBps) [2024-11-27T11:28:19.580Z] Copying: 479/1024 [MB] (14 MBps) [2024-11-27T11:28:20.968Z] Copying: 489/1024 [MB] (10 MBps) [2024-11-27T11:28:21.910Z] Copying: 506/1024 [MB] (16 MBps) [2024-11-27T11:28:22.859Z] Copying: 517/1024 [MB] (11 MBps) [2024-11-27T11:28:23.800Z] Copying: 532/1024 [MB] (14 MBps) [2024-11-27T11:28:24.744Z] Copying: 542/1024 [MB] (10 MBps) [2024-11-27T11:28:25.682Z] Copying: 552/1024 [MB] (10 MBps) [2024-11-27T11:28:26.625Z] Copying: 586/1024 [MB] (33 MBps) [2024-11-27T11:28:27.568Z] Copying: 598/1024 [MB] (12 MBps) [2024-11-27T11:28:28.947Z] Copying: 609/1024 [MB] (10 MBps) [2024-11-27T11:28:29.617Z] Copying: 627/1024 [MB] (17 MBps) [2024-11-27T11:28:31.001Z] Copying: 647/1024 [MB] (20 MBps) [2024-11-27T11:28:31.571Z] Copying: 673/1024 [MB] (26 MBps) [2024-11-27T11:28:32.951Z] Copying: 694/1024 [MB] (20 MBps) [2024-11-27T11:28:33.889Z] Copying: 728/1024 [MB] (34 MBps) [2024-11-27T11:28:34.832Z] Copying: 746/1024 [MB] (18 MBps) [2024-11-27T11:28:35.777Z] Copying: 759/1024 [MB] (12 MBps) [2024-11-27T11:28:36.716Z] Copying: 771/1024 [MB] (12 MBps) [2024-11-27T11:28:37.658Z] Copying: 797/1024 [MB] (26 MBps) [2024-11-27T11:28:38.598Z] Copying: 820/1024 [MB] (23 MBps) [2024-11-27T11:28:39.982Z] Copying: 838/1024 [MB] (17 MBps) [2024-11-27T11:28:40.923Z] Copying: 859/1024 [MB] (21 MBps) [2024-11-27T11:28:41.862Z] Copying: 875/1024 [MB] (16 MBps) [2024-11-27T11:28:42.797Z] Copying: 888/1024 [MB] (12 MBps) [2024-11-27T11:28:43.731Z] Copying: 908/1024 
[MB] (20 MBps) [2024-11-27T11:28:44.665Z] Copying: 930/1024 [MB] (22 MBps) [2024-11-27T11:28:45.598Z] Copying: 961/1024 [MB] (31 MBps) [2024-11-27T11:28:46.975Z] Copying: 983/1024 [MB] (21 MBps) [2024-11-27T11:28:47.913Z] Copying: 1013/1024 [MB] (29 MBps) [2024-11-27T11:28:47.913Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-27T11:28:47.913Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-27 11:28:47.835097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.030 [2024-11-27 11:28:47.835164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:19.030 [2024-11-27 11:28:47.835177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:19.030 [2024-11-27 11:28:47.835184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.030 [2024-11-27 11:28:47.836867] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:19.030 [2024-11-27 11:28:47.839058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.030 [2024-11-27 11:28:47.839086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:19.030 [2024-11-27 11:28:47.839098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.154 ms 00:33:19.030 [2024-11-27 11:28:47.839105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.030 [2024-11-27 11:28:47.847046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.030 [2024-11-27 11:28:47.847073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:19.031 [2024-11-27 11:28:47.847084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.523 ms 00:33:19.031 [2024-11-27 11:28:47.847090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.031 [2024-11-27 11:28:47.847110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.031 [2024-11-27 11:28:47.847118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:19.031 [2024-11-27 11:28:47.847125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:19.031 [2024-11-27 11:28:47.847131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.031 [2024-11-27 11:28:47.847173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.031 [2024-11-27 11:28:47.847180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:19.031 [2024-11-27 11:28:47.847187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:19.031 [2024-11-27 11:28:47.847194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.031 [2024-11-27 11:28:47.847205] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:19.031 [2024-11-27 11:28:47.847214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130048 / 261120 wr_cnt: 1 state: open 00:33:19.031 [2024-11-27 11:28:47.847223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847241] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 
11:28:47.847396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:33:19.031 [2024-11-27 11:28:47.847550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:19.031 [2024-11-27 11:28:47.847691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:19.032 [2024-11-27 11:28:47.847823] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:19.032 [2024-11-27 11:28:47.847830] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc5eda24-0ce9-482a-92cf-9e0de2879e0b 00:33:19.032 [2024-11-27 11:28:47.847838] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130048 00:33:19.032 [2024-11-27 11:28:47.847844] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130080 00:33:19.032 [2024-11-27 
11:28:47.847849] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130048 00:33:19.032 [2024-11-27 11:28:47.847855] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:33:19.032 [2024-11-27 11:28:47.847861] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:19.032 [2024-11-27 11:28:47.847867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:19.032 [2024-11-27 11:28:47.847874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:19.032 [2024-11-27 11:28:47.847879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:19.032 [2024-11-27 11:28:47.847884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:19.032 [2024-11-27 11:28:47.847904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.032 [2024-11-27 11:28:47.847910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:19.032 [2024-11-27 11:28:47.847916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:33:19.032 [2024-11-27 11:28:47.847922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.849140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.032 [2024-11-27 11:28:47.849158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:19.032 [2024-11-27 11:28:47.849165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:33:19.032 [2024-11-27 11:28:47.849172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.849243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.032 [2024-11-27 11:28:47.849250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:19.032 [2024-11-27 11:28:47.849257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:33:19.032 [2024-11-27 11:28:47.849263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.852896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:19.032 [2024-11-27 11:28:47.852916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:19.032 [2024-11-27 11:28:47.852927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:19.032 [2024-11-27 11:28:47.852933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.852969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:19.032 [2024-11-27 11:28:47.852976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:19.032 [2024-11-27 11:28:47.852982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:19.032 [2024-11-27 11:28:47.852987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.853010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:19.032 [2024-11-27 11:28:47.853017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:19.032 [2024-11-27 11:28:47.853023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:19.032 [2024-11-27 11:28:47.853030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.853041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:33:19.032 [2024-11-27 11:28:47.853047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:19.032 [2024-11-27 11:28:47.853053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:19.032 [2024-11-27 11:28:47.853059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.860245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:19.032 [2024-11-27 11:28:47.860281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:19.032 [2024-11-27 11:28:47.860289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:19.032 [2024-11-27 11:28:47.860298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.866660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:19.032 [2024-11-27 11:28:47.866788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:19.032 [2024-11-27 11:28:47.866800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:19.032 [2024-11-27 11:28:47.866806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.866841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:19.032 [2024-11-27 11:28:47.866848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:19.032 [2024-11-27 11:28:47.866855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:19.032 [2024-11-27 11:28:47.866861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.866883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:19.032 [2024-11-27 11:28:47.866904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:19.032 [2024-11-27 11:28:47.866911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:19.032 [2024-11-27 11:28:47.866917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.866957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:19.032 [2024-11-27 11:28:47.866964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:19.032 [2024-11-27 11:28:47.866970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:19.032 [2024-11-27 11:28:47.866976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.866993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:19.032 [2024-11-27 11:28:47.867007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:19.032 [2024-11-27 11:28:47.867013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:19.032 [2024-11-27 11:28:47.867018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.867045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:19.032 [2024-11-27 11:28:47.867052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:19.032 [2024-11-27 11:28:47.867057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:19.032 [2024-11-27 11:28:47.867063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 
[2024-11-27 11:28:47.867095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:19.032 [2024-11-27 11:28:47.867103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:19.032 [2024-11-27 11:28:47.867109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:19.032 [2024-11-27 11:28:47.867115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.032 [2024-11-27 11:28:47.867206] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 33.082 ms, result 0 00:33:19.978 00:33:19.978 00:33:19.978 11:28:48 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:33:19.978 [2024-11-27 11:28:48.748327] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:19.978 [2024-11-27 11:28:48.748608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95569 ] 00:33:20.237 [2024-11-27 11:28:48.895525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.237 [2024-11-27 11:28:48.934077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.237 [2024-11-27 11:28:49.017251] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:20.237 [2024-11-27 11:28:49.017305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:20.500 [2024-11-27 11:28:49.159263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.500 [2024-11-27 11:28:49.159295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:20.500 [2024-11-27 11:28:49.159307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:20.500 [2024-11-27 11:28:49.159313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.500 [2024-11-27 11:28:49.159349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.500 [2024-11-27 11:28:49.159359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:20.500 [2024-11-27 11:28:49.159365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:33:20.500 [2024-11-27 11:28:49.159371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.500 [2024-11-27 11:28:49.159383] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:20.500 [2024-11-27 11:28:49.159557] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:20.500 [2024-11-27 11:28:49.159567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.500 [2024-11-27 11:28:49.159575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:20.500 [2024-11-27 11:28:49.159581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:33:20.500 [2024-11-27 11:28:49.159589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.500 [2024-11-27 11:28:49.159760] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] 
SHM: clean 1, shm_clean 1 00:33:20.500 [2024-11-27 11:28:49.159774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.500 [2024-11-27 11:28:49.159783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:20.500 [2024-11-27 11:28:49.159790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:33:20.500 [2024-11-27 11:28:49.159796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.500 [2024-11-27 11:28:49.159856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.500 [2024-11-27 11:28:49.159865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:20.500 [2024-11-27 11:28:49.159874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:33:20.500 [2024-11-27 11:28:49.159880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.500 [2024-11-27 11:28:49.160085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.500 [2024-11-27 11:28:49.160093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:20.500 [2024-11-27 11:28:49.160103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:33:20.500 [2024-11-27 11:28:49.160108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.500 [2024-11-27 11:28:49.160163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.160172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:20.501 [2024-11-27 11:28:49.160178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:33:20.501 [2024-11-27 11:28:49.160183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.160198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.160204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:20.501 [2024-11-27 11:28:49.160210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:20.501 [2024-11-27 11:28:49.160216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.160228] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:20.501 [2024-11-27 11:28:49.161425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.161437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:20.501 [2024-11-27 11:28:49.161444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.200 ms 00:33:20.501 [2024-11-27 11:28:49.161450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.161477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.161483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:20.501 [2024-11-27 11:28:49.161493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:20.501 [2024-11-27 11:28:49.161498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.161511] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:20.501 [2024-11-27 11:28:49.161525] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:20.501 [2024-11-27 11:28:49.161553] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:20.501 [2024-11-27 11:28:49.161567] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:20.501 [2024-11-27 11:28:49.161646] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:20.501 [2024-11-27 11:28:49.161654] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:20.501 [2024-11-27 11:28:49.161664] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:20.501 [2024-11-27 11:28:49.161672] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:20.501 [2024-11-27 11:28:49.161678] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:20.501 [2024-11-27 11:28:49.161687] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:20.501 [2024-11-27 11:28:49.161694] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:20.501 [2024-11-27 11:28:49.161699] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:20.501 [2024-11-27 11:28:49.161706] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:20.501 [2024-11-27 11:28:49.161712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.161718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:20.501 [2024-11-27 11:28:49.161723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:33:20.501 [2024-11-27 11:28:49.161729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.161790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.161796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:20.501 [2024-11-27 11:28:49.161802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:20.501 [2024-11-27 11:28:49.161809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.161879] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:20.501 [2024-11-27 11:28:49.161886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:20.501 [2024-11-27 11:28:49.161902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:20.501 [2024-11-27 11:28:49.161912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:20.501 [2024-11-27 11:28:49.161922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:20.501 [2024-11-27 11:28:49.161930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:20.501 [2024-11-27 11:28:49.161938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:20.501 [2024-11-27 11:28:49.161943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:20.501 [2024-11-27 11:28:49.161948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:20.501 [2024-11-27 11:28:49.162043] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:20.501 [2024-11-27 11:28:49.162048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:20.501 [2024-11-27 11:28:49.162053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:20.501 [2024-11-27 11:28:49.162058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:20.501 [2024-11-27 11:28:49.162064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:20.501 [2024-11-27 11:28:49.162069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:20.501 [2024-11-27 11:28:49.162074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:20.501 [2024-11-27 11:28:49.162080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:20.501 [2024-11-27 11:28:49.162086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:20.501 [2024-11-27 11:28:49.162091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:20.501 [2024-11-27 11:28:49.162097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:20.501 [2024-11-27 11:28:49.162103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:20.501 [2024-11-27 11:28:49.162109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:20.501 [2024-11-27 11:28:49.162116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:20.501 [2024-11-27 11:28:49.162122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:20.501 [2024-11-27 11:28:49.162127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:20.501 [2024-11-27 11:28:49.162133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:20.501 [2024-11-27 11:28:49.162139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:20.501 [2024-11-27 11:28:49.162144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:20.501 [2024-11-27 11:28:49.162150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:20.501 [2024-11-27 11:28:49.162156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:20.501 [2024-11-27 11:28:49.162161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:20.501 [2024-11-27 11:28:49.162167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:20.501 [2024-11-27 11:28:49.162175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:20.501 [2024-11-27 11:28:49.162180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:20.501 [2024-11-27 11:28:49.162186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:20.501 [2024-11-27 11:28:49.162192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:20.501 [2024-11-27 11:28:49.162198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:20.501 [2024-11-27 11:28:49.162203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:20.501 [2024-11-27 11:28:49.162210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:20.501 [2024-11-27 11:28:49.162216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:20.501 [2024-11-27 11:28:49.162222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:20.501 
[2024-11-27 11:28:49.162228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:20.501 [2024-11-27 11:28:49.162233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:20.501 [2024-11-27 11:28:49.162239] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:20.501 [2024-11-27 11:28:49.162246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:20.501 [2024-11-27 11:28:49.162252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:20.501 [2024-11-27 11:28:49.162258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:20.501 [2024-11-27 11:28:49.162264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:20.501 [2024-11-27 11:28:49.162270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:20.501 [2024-11-27 11:28:49.162275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:20.501 [2024-11-27 11:28:49.162281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:20.501 [2024-11-27 11:28:49.162287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:20.501 [2024-11-27 11:28:49.162293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:20.501 [2024-11-27 11:28:49.162299] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:20.501 [2024-11-27 11:28:49.162310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:20.501 [2024-11-27 11:28:49.162317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:20.501 [2024-11-27 11:28:49.162323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:20.501 [2024-11-27 11:28:49.162329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:20.501 [2024-11-27 11:28:49.162335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:20.501 [2024-11-27 11:28:49.162342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:20.501 [2024-11-27 11:28:49.162348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:20.501 [2024-11-27 11:28:49.162354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:20.501 [2024-11-27 11:28:49.162360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:20.501 [2024-11-27 11:28:49.162365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:20.501 [2024-11-27 11:28:49.162372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:20.501 [2024-11-27 11:28:49.162378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 
blk_offs:0x71c0 blk_sz:0x20 00:33:20.501 [2024-11-27 11:28:49.162384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:20.501 [2024-11-27 11:28:49.162390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:20.501 [2024-11-27 11:28:49.162397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:20.501 [2024-11-27 11:28:49.162403] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:20.501 [2024-11-27 11:28:49.162411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:20.501 [2024-11-27 11:28:49.162418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:20.501 [2024-11-27 11:28:49.162425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:20.501 [2024-11-27 11:28:49.162432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:20.501 [2024-11-27 11:28:49.162438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:20.501 [2024-11-27 11:28:49.162444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.162450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:20.501 [2024-11-27 11:28:49.162457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:33:20.501 [2024-11-27 11:28:49.162462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.178266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.178330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:20.501 [2024-11-27 11:28:49.178358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.768 ms 00:33:20.501 [2024-11-27 11:28:49.178373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.178548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.178585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:20.501 [2024-11-27 11:28:49.178602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:33:20.501 [2024-11-27 11:28:49.178619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.190190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.190217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:20.501 [2024-11-27 11:28:49.190227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.463 ms 00:33:20.501 [2024-11-27 11:28:49.190236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.190264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.190270] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:20.501 [2024-11-27 11:28:49.190277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:20.501 [2024-11-27 11:28:49.190283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.190345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.190353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:20.501 [2024-11-27 11:28:49.190359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:33:20.501 [2024-11-27 11:28:49.190366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.190454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.190460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:20.501 [2024-11-27 11:28:49.190466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:33:20.501 [2024-11-27 11:28:49.190472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.194391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.194497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:20.501 [2024-11-27 11:28:49.194517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.906 ms 00:33:20.501 [2024-11-27 11:28:49.194525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.194604] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:20.501 [2024-11-27 11:28:49.194614] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:20.501 [2024-11-27 11:28:49.194621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.194626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:20.501 [2024-11-27 11:28:49.194639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:20.501 [2024-11-27 11:28:49.194644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.203768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.203792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:20.501 [2024-11-27 11:28:49.203799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.112 ms 00:33:20.501 [2024-11-27 11:28:49.203805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.203887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.203905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:20.501 [2024-11-27 11:28:49.203913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:33:20.501 [2024-11-27 11:28:49.203922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.203956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.203963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim 
metadata 00:33:20.501 [2024-11-27 11:28:49.203969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:20.501 [2024-11-27 11:28:49.203978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.204198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.204213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:20.501 [2024-11-27 11:28:49.204219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:33:20.501 [2024-11-27 11:28:49.204224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.204234] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:20.501 [2024-11-27 11:28:49.204241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.204247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:20.501 [2024-11-27 11:28:49.204255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:20.501 [2024-11-27 11:28:49.204261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.210447] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:20.501 [2024-11-27 11:28:49.210536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.210543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:20.501 [2024-11-27 11:28:49.210549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.263 ms 00:33:20.501 [2024-11-27 11:28:49.210555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.212323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.212342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:20.501 [2024-11-27 11:28:49.212350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:33:20.501 [2024-11-27 11:28:49.212356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.212393] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:33:20.501 [2024-11-27 11:28:49.212816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.212831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:20.501 [2024-11-27 11:28:49.212842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:33:20.501 [2024-11-27 11:28:49.212847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.501 [2024-11-27 11:28:49.212867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.501 [2024-11-27 11:28:49.212876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:20.502 [2024-11-27 11:28:49.212882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:20.502 [2024-11-27 11:28:49.212887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.502 [2024-11-27 11:28:49.212931] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:20.502 [2024-11-27 
11:28:49.212938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.502 [2024-11-27 11:28:49.212944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:20.502 [2024-11-27 11:28:49.212950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:20.502 [2024-11-27 11:28:49.212957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.502 [2024-11-27 11:28:49.215920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.502 [2024-11-27 11:28:49.215947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:20.502 [2024-11-27 11:28:49.215957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.951 ms 00:33:20.502 [2024-11-27 11:28:49.215964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.502 [2024-11-27 11:28:49.216017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.502 [2024-11-27 11:28:49.216025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:20.502 [2024-11-27 11:28:49.216031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:20.502 [2024-11-27 11:28:49.216037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.502 [2024-11-27 11:28:49.216810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 57.243 ms, result 0 00:33:21.889  [2024-11-27T11:28:51.718Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-27T11:28:52.662Z] Copying: 37/1024 [MB] (19 MBps) [2024-11-27T11:28:53.606Z] Copying: 57/1024 [MB] (20 MBps) [2024-11-27T11:28:54.551Z] Copying: 77/1024 [MB] (19 MBps) [2024-11-27T11:28:55.493Z] Copying: 100/1024 [MB] (23 MBps) [2024-11-27T11:28:56.437Z] Copying: 121/1024 [MB] (21 MBps) [2024-11-27T11:28:57.383Z] Copying: 144/1024 [MB] (22 MBps) [2024-11-27T11:28:58.382Z] Copying: 163/1024 [MB] (18 MBps) [2024-11-27T11:28:59.778Z] Copying: 179/1024 [MB] (15 MBps) [2024-11-27T11:29:00.724Z] Copying: 197/1024 [MB] (18 MBps) [2024-11-27T11:29:01.668Z] Copying: 209/1024 [MB] (11 MBps) [2024-11-27T11:29:02.614Z] Copying: 220/1024 [MB] (11 MBps) [2024-11-27T11:29:03.560Z] Copying: 231/1024 [MB] (10 MBps) [2024-11-27T11:29:04.504Z] Copying: 241/1024 [MB] (10 MBps) [2024-11-27T11:29:05.449Z] Copying: 253/1024 [MB] (11 MBps) [2024-11-27T11:29:06.392Z] Copying: 263/1024 [MB] (10 MBps) [2024-11-27T11:29:07.776Z] Copying: 274/1024 [MB] (10 MBps) [2024-11-27T11:29:08.720Z] Copying: 284/1024 [MB] (10 MBps) [2024-11-27T11:29:09.662Z] Copying: 295/1024 [MB] (10 MBps) [2024-11-27T11:29:10.607Z] Copying: 306/1024 [MB] (10 MBps) [2024-11-27T11:29:11.552Z] Copying: 316/1024 [MB] (10 MBps) [2024-11-27T11:29:12.497Z] Copying: 327/1024 [MB] (10 MBps) [2024-11-27T11:29:13.441Z] Copying: 338/1024 [MB] (10 MBps) [2024-11-27T11:29:14.386Z] Copying: 348/1024 [MB] (10 MBps) [2024-11-27T11:29:15.776Z] Copying: 359/1024 [MB] (10 MBps) [2024-11-27T11:29:16.721Z] Copying: 370/1024 [MB] (10 MBps) [2024-11-27T11:29:17.667Z] Copying: 381/1024 [MB] (10 MBps) [2024-11-27T11:29:18.611Z] Copying: 391/1024 [MB] (10 MBps) [2024-11-27T11:29:19.553Z] Copying: 401/1024 [MB] (10 MBps) [2024-11-27T11:29:20.497Z] Copying: 412/1024 [MB] (10 MBps) [2024-11-27T11:29:21.444Z] Copying: 422/1024 [MB] (10 MBps) [2024-11-27T11:29:22.388Z] Copying: 433/1024 [MB] (10 MBps) [2024-11-27T11:29:23.452Z] Copying: 443/1024 [MB] (10 MBps) [2024-11-27T11:29:24.400Z] Copying: 454/1024 [MB] (10 
MBps) [2024-11-27T11:29:25.788Z] Copying: 467/1024 [MB] (13 MBps) [2024-11-27T11:29:26.359Z] Copying: 487/1024 [MB] (19 MBps) [2024-11-27T11:29:27.746Z] Copying: 499/1024 [MB] (12 MBps) [2024-11-27T11:29:28.689Z] Copying: 513/1024 [MB] (14 MBps) [2024-11-27T11:29:29.630Z] Copying: 525/1024 [MB] (11 MBps) [2024-11-27T11:29:30.575Z] Copying: 539/1024 [MB] (14 MBps) [2024-11-27T11:29:31.519Z] Copying: 554/1024 [MB] (15 MBps) [2024-11-27T11:29:32.461Z] Copying: 574/1024 [MB] (19 MBps) [2024-11-27T11:29:33.406Z] Copying: 590/1024 [MB] (15 MBps) [2024-11-27T11:29:34.793Z] Copying: 601/1024 [MB] (11 MBps) [2024-11-27T11:29:35.364Z] Copying: 623/1024 [MB] (21 MBps) [2024-11-27T11:29:36.751Z] Copying: 637/1024 [MB] (13 MBps) [2024-11-27T11:29:37.695Z] Copying: 648/1024 [MB] (10 MBps) [2024-11-27T11:29:38.640Z] Copying: 658/1024 [MB] (10 MBps) [2024-11-27T11:29:39.583Z] Copying: 671/1024 [MB] (12 MBps) [2024-11-27T11:29:40.526Z] Copying: 685/1024 [MB] (13 MBps) [2024-11-27T11:29:41.468Z] Copying: 701/1024 [MB] (16 MBps) [2024-11-27T11:29:42.409Z] Copying: 714/1024 [MB] (12 MBps) [2024-11-27T11:29:43.795Z] Copying: 725/1024 [MB] (11 MBps) [2024-11-27T11:29:44.369Z] Copying: 753/1024 [MB] (27 MBps) [2024-11-27T11:29:45.755Z] Copying: 769/1024 [MB] (16 MBps) [2024-11-27T11:29:46.698Z] Copying: 790/1024 [MB] (20 MBps) [2024-11-27T11:29:47.640Z] Copying: 807/1024 [MB] (17 MBps) [2024-11-27T11:29:48.582Z] Copying: 822/1024 [MB] (14 MBps) [2024-11-27T11:29:49.524Z] Copying: 839/1024 [MB] (16 MBps) [2024-11-27T11:29:50.467Z] Copying: 858/1024 [MB] (18 MBps) [2024-11-27T11:29:51.411Z] Copying: 877/1024 [MB] (19 MBps) [2024-11-27T11:29:52.795Z] Copying: 891/1024 [MB] (14 MBps) [2024-11-27T11:29:53.369Z] Copying: 907/1024 [MB] (15 MBps) [2024-11-27T11:29:54.760Z] Copying: 917/1024 [MB] (10 MBps) [2024-11-27T11:29:55.705Z] Copying: 928/1024 [MB] (11 MBps) [2024-11-27T11:29:56.652Z] Copying: 940/1024 [MB] (11 MBps) [2024-11-27T11:29:57.598Z] Copying: 950/1024 [MB] (10 MBps) [2024-11-27T11:29:58.543Z] Copying: 961/1024 [MB] (10 MBps) [2024-11-27T11:29:59.490Z] Copying: 971/1024 [MB] (10 MBps) [2024-11-27T11:30:00.434Z] Copying: 981/1024 [MB] (10 MBps) [2024-11-27T11:30:01.376Z] Copying: 992/1024 [MB] (10 MBps) [2024-11-27T11:30:02.392Z] Copying: 1003/1024 [MB] (10 MBps) [2024-11-27T11:30:03.351Z] Copying: 1013/1024 [MB] (10 MBps) [2024-11-27T11:30:03.615Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-27 11:30:03.423832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.732 [2024-11-27 11:30:03.423958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:34.732 [2024-11-27 11:30:03.423993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:34.732 [2024-11-27 11:30:03.424009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.732 [2024-11-27 11:30:03.424048] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:34.732 [2024-11-27 11:30:03.425446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.732 [2024-11-27 11:30:03.425499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:34.732 [2024-11-27 11:30:03.425517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.371 ms 00:34:34.732 [2024-11-27 11:30:03.425531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.732 [2024-11-27 11:30:03.426001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:34:34.732 [2024-11-27 11:30:03.426022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:34.732 [2024-11-27 11:30:03.426044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:34:34.732 [2024-11-27 11:30:03.426065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.732 [2024-11-27 11:30:03.426120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.732 [2024-11-27 11:30:03.426135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:34.732 [2024-11-27 11:30:03.426150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:34.732 [2024-11-27 11:30:03.426163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.732 [2024-11-27 11:30:03.426253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.732 [2024-11-27 11:30:03.426269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:34.732 [2024-11-27 11:30:03.426284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:34:34.732 [2024-11-27 11:30:03.426302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.732 [2024-11-27 11:30:03.426327] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:34.732 [2024-11-27 11:30:03.426348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:34:34.732 [2024-11-27 11:30:03.426365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 11:30:03.426379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 11:30:03.426392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 11:30:03.426405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 11:30:03.426419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 11:30:03.426432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 11:30:03.426446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 11:30:03.426460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 11:30:03.426474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 11:30:03.426487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 11:30:03.426500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 11:30:03.426515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 11:30:03.426528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 11:30:03.426543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:34.732 [2024-11-27 
11:30:03.426556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:34:34.733 [2024-11-27 11:30:03.426926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.426995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 
wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:34.733 [2024-11-27 11:30:03.427763] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:34.733 [2024-11-27 11:30:03.427776] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc5eda24-0ce9-482a-92cf-9e0de2879e0b 00:34:34.733 [2024-11-27 11:30:03.427796] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:34:34.733 [2024-11-27 11:30:03.427815] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1056 00:34:34.733 [2024-11-27 11:30:03.427827] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1024 00:34:34.733 [2024-11-27 11:30:03.427841] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0312 00:34:34.734 [2024-11-27 11:30:03.427854] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:34.734 [2024-11-27 11:30:03.427867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:34.734 [2024-11-27 11:30:03.427901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:34.734 [2024-11-27 11:30:03.427913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:34.734 [2024-11-27 11:30:03.427924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:34.734 [2024-11-27 11:30:03.427938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.734 [2024-11-27 11:30:03.427951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:34.734 [2024-11-27 11:30:03.427966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.612 ms 00:34:34.734 [2024-11-27 11:30:03.427985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.430659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.734 [2024-11-27 11:30:03.430705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:34.734 [2024-11-27 11:30:03.430718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:34:34.734 
[2024-11-27 11:30:03.430727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.430860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:34.734 [2024-11-27 11:30:03.430868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:34.734 [2024-11-27 11:30:03.430877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:34:34.734 [2024-11-27 11:30:03.430885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.437882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:34.734 [2024-11-27 11:30:03.437955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:34.734 [2024-11-27 11:30:03.437967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:34.734 [2024-11-27 11:30:03.438000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.438074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:34.734 [2024-11-27 11:30:03.438083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:34.734 [2024-11-27 11:30:03.438092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:34.734 [2024-11-27 11:30:03.438102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.438166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:34.734 [2024-11-27 11:30:03.438177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:34.734 [2024-11-27 11:30:03.438186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:34.734 [2024-11-27 11:30:03.438194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.438215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:34.734 [2024-11-27 11:30:03.438223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:34.734 [2024-11-27 11:30:03.438231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:34.734 [2024-11-27 11:30:03.438239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.451860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:34.734 [2024-11-27 11:30:03.451918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:34.734 [2024-11-27 11:30:03.451929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:34.734 [2024-11-27 11:30:03.451943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.462794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:34.734 [2024-11-27 11:30:03.462843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:34.734 [2024-11-27 11:30:03.462854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:34.734 [2024-11-27 11:30:03.462862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.462930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:34.734 [2024-11-27 11:30:03.462941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:34.734 [2024-11-27 11:30:03.462950] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:34.734 [2024-11-27 11:30:03.462959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.463009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:34.734 [2024-11-27 11:30:03.463019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:34.734 [2024-11-27 11:30:03.463027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:34.734 [2024-11-27 11:30:03.463035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.463087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:34.734 [2024-11-27 11:30:03.463097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:34.734 [2024-11-27 11:30:03.463110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:34.734 [2024-11-27 11:30:03.463118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.463145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:34.734 [2024-11-27 11:30:03.463163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:34.734 [2024-11-27 11:30:03.463171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:34.734 [2024-11-27 11:30:03.463178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.463220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:34.734 [2024-11-27 11:30:03.463230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:34.734 [2024-11-27 11:30:03.463238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:34.734 [2024-11-27 11:30:03.463246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.463292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:34.734 [2024-11-27 11:30:03.463302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:34.734 [2024-11-27 11:30:03.463311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:34.734 [2024-11-27 11:30:03.463318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:34.734 [2024-11-27 11:30:03.463451] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 39.602 ms, result 0 00:34:34.996 00:34:34.996 00:34:34.996 11:30:03 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:37.545 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:37.545 11:30:05 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:37.545 11:30:05 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:37.545 11:30:05 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:37.545 Process with pid 93441 is not found 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 93441 00:34:37.545 11:30:06 
ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93441 ']' 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93441 00:34:37.545 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (93441) - No such process 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 93441 is not found' 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:34:37.545 Remove shared memory files 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_bc5eda24-0ce9-482a-92cf-9e0de2879e0b_band_md /dev/hugepages/ftl_bc5eda24-0ce9-482a-92cf-9e0de2879e0b_l2p_l1 /dev/hugepages/ftl_bc5eda24-0ce9-482a-92cf-9e0de2879e0b_l2p_l2 /dev/hugepages/ftl_bc5eda24-0ce9-482a-92cf-9e0de2879e0b_l2p_l2_ctx /dev/hugepages/ftl_bc5eda24-0ce9-482a-92cf-9e0de2879e0b_nvc_md /dev/hugepages/ftl_bc5eda24-0ce9-482a-92cf-9e0de2879e0b_p2l_pool /dev/hugepages/ftl_bc5eda24-0ce9-482a-92cf-9e0de2879e0b_sb /dev/hugepages/ftl_bc5eda24-0ce9-482a-92cf-9e0de2879e0b_sb_shm /dev/hugepages/ftl_bc5eda24-0ce9-482a-92cf-9e0de2879e0b_trim_bitmap /dev/hugepages/ftl_bc5eda24-0ce9-482a-92cf-9e0de2879e0b_trim_log /dev/hugepages/ftl_bc5eda24-0ce9-482a-92cf-9e0de2879e0b_trim_md /dev/hugepages/ftl_bc5eda24-0ce9-482a-92cf-9e0de2879e0b_vmap 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:37.545 00:34:37.545 real 4m44.858s 00:34:37.545 user 4m32.078s 00:34:37.545 sys 0m12.384s 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:37.545 ************************************ 00:34:37.545 END TEST ftl_restore_fast 00:34:37.545 ************************************ 00:34:37.545 11:30:06 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:37.545 11:30:06 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:37.545 11:30:06 ftl -- ftl/ftl.sh@14 -- # killprocess 84187 00:34:37.545 11:30:06 ftl -- common/autotest_common.sh@950 -- # '[' -z 84187 ']' 00:34:37.546 11:30:06 ftl -- common/autotest_common.sh@954 -- # kill -0 84187 00:34:37.546 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (84187) - No such process 00:34:37.546 Process with pid 84187 is not found 00:34:37.546 11:30:06 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 84187 is not found' 00:34:37.546 11:30:06 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:37.546 11:30:06 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=96376 00:34:37.546 11:30:06 ftl -- ftl/ftl.sh@20 -- # waitforlisten 96376 00:34:37.546 11:30:06 ftl -- common/autotest_common.sh@831 -- # '[' -z 96376 ']' 00:34:37.546 11:30:06 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:37.546 11:30:06 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:37.546 11:30:06 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:37.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:37.546 11:30:06 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:37.546 11:30:06 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:37.546 11:30:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:37.546 [2024-11-27 11:30:06.209323] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:37.546 [2024-11-27 11:30:06.209553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96376 ] 00:34:37.546 [2024-11-27 11:30:06.359197] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.546 [2024-11-27 11:30:06.410993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.492 11:30:07 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:38.492 11:30:07 ftl -- common/autotest_common.sh@864 -- # return 0 00:34:38.492 11:30:07 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:38.492 nvme0n1 00:34:38.492 11:30:07 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:38.492 11:30:07 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:38.492 11:30:07 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:38.754 11:30:07 ftl -- ftl/common.sh@28 -- # stores=ac950b67-84b9-44ba-bf31-b0998524143d 00:34:38.754 11:30:07 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:38.754 11:30:07 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ac950b67-84b9-44ba-bf31-b0998524143d 00:34:39.015 11:30:07 ftl -- ftl/ftl.sh@23 -- # killprocess 96376 00:34:39.015 11:30:07 ftl -- common/autotest_common.sh@950 -- # '[' -z 96376 ']' 00:34:39.015 11:30:07 ftl -- common/autotest_common.sh@954 -- # kill -0 96376 00:34:39.015 11:30:07 ftl -- common/autotest_common.sh@955 -- # uname 00:34:39.015 11:30:07 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:39.015 11:30:07 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96376 00:34:39.015 11:30:07 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:39.015 killing process with pid 96376 00:34:39.015 11:30:07 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:39.015 11:30:07 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96376' 00:34:39.015 11:30:07 ftl -- common/autotest_common.sh@969 -- # kill 96376 00:34:39.015 11:30:07 ftl -- common/autotest_common.sh@974 -- # wait 96376 00:34:39.276 11:30:08 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:39.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:39.539 Waiting for block devices as requested 00:34:39.539 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:39.800 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:39.800 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:39.800 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:45.088 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:45.088 11:30:13 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:45.088 Remove shared memory files 00:34:45.088 11:30:13 ftl -- 
ftl/common.sh@204 -- # echo Remove shared memory files 00:34:45.088 11:30:13 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:45.088 11:30:13 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:45.088 11:30:13 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:45.088 11:30:13 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:45.088 11:30:13 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:45.088 ************************************ 00:34:45.088 END TEST ftl 00:34:45.088 ************************************ 00:34:45.088 00:34:45.088 real 18m19.625s 00:34:45.088 user 20m19.361s 00:34:45.088 sys 1m26.880s 00:34:45.088 11:30:13 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:45.088 11:30:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:45.088 11:30:13 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:45.088 11:30:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:45.088 11:30:13 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:34:45.088 11:30:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:45.088 11:30:13 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:34:45.088 11:30:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:45.088 11:30:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:45.088 11:30:13 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:34:45.088 11:30:13 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:34:45.088 11:30:13 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:34:45.088 11:30:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:45.088 11:30:13 -- common/autotest_common.sh@10 -- # set +x 00:34:45.088 11:30:13 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:34:45.088 11:30:13 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:34:45.088 11:30:13 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:34:45.088 11:30:13 -- common/autotest_common.sh@10 -- # set +x 00:34:46.475 INFO: APP EXITING 00:34:46.475 INFO: killing all VMs 00:34:46.475 INFO: killing vhost app 00:34:46.475 INFO: EXIT DONE 00:34:46.736 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:47.309 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:47.309 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:47.309 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:47.309 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:47.571 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:47.833 Cleaning 00:34:47.833 Removing: /var/run/dpdk/spdk0/config 00:34:47.833 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:47.833 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:47.833 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:47.833 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:47.833 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:47.833 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:47.833 Removing: /var/run/dpdk/spdk0 00:34:47.833 Removing: /var/run/dpdk/spdk_pid69664 00:34:47.833 Removing: /var/run/dpdk/spdk_pid69828 00:34:47.834 Removing: /var/run/dpdk/spdk_pid70029 00:34:47.834 Removing: /var/run/dpdk/spdk_pid70117 00:34:48.095 Removing: /var/run/dpdk/spdk_pid70145 00:34:48.095 Removing: /var/run/dpdk/spdk_pid70257 00:34:48.095 Removing: /var/run/dpdk/spdk_pid70269 00:34:48.095 Removing: /var/run/dpdk/spdk_pid70452 00:34:48.095 Removing: /var/run/dpdk/spdk_pid70525 00:34:48.095 Removing: /var/run/dpdk/spdk_pid70605 
00:34:48.095 Removing: /var/run/dpdk/spdk_pid70705 00:34:48.095 Removing: /var/run/dpdk/spdk_pid70780 00:34:48.095 Removing: /var/run/dpdk/spdk_pid70819 00:34:48.095 Removing: /var/run/dpdk/spdk_pid70856 00:34:48.095 Removing: /var/run/dpdk/spdk_pid70921 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71027 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71446 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71494 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71535 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71551 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71609 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71625 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71683 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71699 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71741 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71759 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71801 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71819 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71946 00:34:48.095 Removing: /var/run/dpdk/spdk_pid71987 00:34:48.095 Removing: /var/run/dpdk/spdk_pid72066 00:34:48.095 Removing: /var/run/dpdk/spdk_pid72227 00:34:48.095 Removing: /var/run/dpdk/spdk_pid72300 00:34:48.095 Removing: /var/run/dpdk/spdk_pid72330 00:34:48.095 Removing: /var/run/dpdk/spdk_pid72742 00:34:48.095 Removing: /var/run/dpdk/spdk_pid72842 00:34:48.095 Removing: /var/run/dpdk/spdk_pid72940 00:34:48.095 Removing: /var/run/dpdk/spdk_pid72982 00:34:48.095 Removing: /var/run/dpdk/spdk_pid73002 00:34:48.095 Removing: /var/run/dpdk/spdk_pid73086 00:34:48.095 Removing: /var/run/dpdk/spdk_pid73694 00:34:48.095 Removing: /var/run/dpdk/spdk_pid73725 00:34:48.095 Removing: /var/run/dpdk/spdk_pid74172 00:34:48.095 Removing: /var/run/dpdk/spdk_pid74259 00:34:48.095 Removing: /var/run/dpdk/spdk_pid74363 00:34:48.095 Removing: /var/run/dpdk/spdk_pid74400 00:34:48.095 Removing: /var/run/dpdk/spdk_pid74425 00:34:48.095 Removing: /var/run/dpdk/spdk_pid74445 00:34:48.095 Removing: /var/run/dpdk/spdk_pid76270 00:34:48.095 Removing: /var/run/dpdk/spdk_pid76385 00:34:48.095 Removing: /var/run/dpdk/spdk_pid76389 00:34:48.095 Removing: /var/run/dpdk/spdk_pid76406 00:34:48.095 Removing: /var/run/dpdk/spdk_pid76452 00:34:48.095 Removing: /var/run/dpdk/spdk_pid76456 00:34:48.095 Removing: /var/run/dpdk/spdk_pid76468 00:34:48.095 Removing: /var/run/dpdk/spdk_pid76513 00:34:48.095 Removing: /var/run/dpdk/spdk_pid76517 00:34:48.095 Removing: /var/run/dpdk/spdk_pid76529 00:34:48.095 Removing: /var/run/dpdk/spdk_pid76568 00:34:48.095 Removing: /var/run/dpdk/spdk_pid76572 00:34:48.095 Removing: /var/run/dpdk/spdk_pid76584 00:34:48.095 Removing: /var/run/dpdk/spdk_pid77952 00:34:48.095 Removing: /var/run/dpdk/spdk_pid78038 00:34:48.095 Removing: /var/run/dpdk/spdk_pid79433 00:34:48.095 Removing: /var/run/dpdk/spdk_pid80793 00:34:48.095 Removing: /var/run/dpdk/spdk_pid80858 00:34:48.095 Removing: /var/run/dpdk/spdk_pid80912 00:34:48.095 Removing: /var/run/dpdk/spdk_pid80961 00:34:48.095 Removing: /var/run/dpdk/spdk_pid81038 00:34:48.095 Removing: /var/run/dpdk/spdk_pid81101 00:34:48.095 Removing: /var/run/dpdk/spdk_pid81243 00:34:48.095 Removing: /var/run/dpdk/spdk_pid81585 00:34:48.095 Removing: /var/run/dpdk/spdk_pid81616 00:34:48.095 Removing: /var/run/dpdk/spdk_pid82058 00:34:48.095 Removing: /var/run/dpdk/spdk_pid82239 00:34:48.095 Removing: /var/run/dpdk/spdk_pid82324 00:34:48.095 Removing: /var/run/dpdk/spdk_pid82424 00:34:48.095 Removing: /var/run/dpdk/spdk_pid82469 00:34:48.095 Removing: /var/run/dpdk/spdk_pid82489 00:34:48.095 Removing: 
/var/run/dpdk/spdk_pid82792 00:34:48.095 Removing: /var/run/dpdk/spdk_pid82830 00:34:48.095 Removing: /var/run/dpdk/spdk_pid82887 00:34:48.095 Removing: /var/run/dpdk/spdk_pid83249 00:34:48.095 Removing: /var/run/dpdk/spdk_pid83394 00:34:48.095 Removing: /var/run/dpdk/spdk_pid84187 00:34:48.095 Removing: /var/run/dpdk/spdk_pid84308 00:34:48.095 Removing: /var/run/dpdk/spdk_pid84465 00:34:48.095 Removing: /var/run/dpdk/spdk_pid84557 00:34:48.095 Removing: /var/run/dpdk/spdk_pid84848 00:34:48.095 Removing: /var/run/dpdk/spdk_pid85123 00:34:48.095 Removing: /var/run/dpdk/spdk_pid85465 00:34:48.095 Removing: /var/run/dpdk/spdk_pid85619 00:34:48.095 Removing: /var/run/dpdk/spdk_pid85771 00:34:48.095 Removing: /var/run/dpdk/spdk_pid85813 00:34:48.095 Removing: /var/run/dpdk/spdk_pid86004 00:34:48.095 Removing: /var/run/dpdk/spdk_pid86018 00:34:48.095 Removing: /var/run/dpdk/spdk_pid86059 00:34:48.095 Removing: /var/run/dpdk/spdk_pid86322 00:34:48.095 Removing: /var/run/dpdk/spdk_pid86552 00:34:48.095 Removing: /var/run/dpdk/spdk_pid87361 00:34:48.095 Removing: /var/run/dpdk/spdk_pid88066 00:34:48.357 Removing: /var/run/dpdk/spdk_pid88790 00:34:48.357 Removing: /var/run/dpdk/spdk_pid89643 00:34:48.357 Removing: /var/run/dpdk/spdk_pid89785 00:34:48.357 Removing: /var/run/dpdk/spdk_pid89862 00:34:48.357 Removing: /var/run/dpdk/spdk_pid90491 00:34:48.357 Removing: /var/run/dpdk/spdk_pid90548 00:34:48.357 Removing: /var/run/dpdk/spdk_pid91292 00:34:48.357 Removing: /var/run/dpdk/spdk_pid91700 00:34:48.357 Removing: /var/run/dpdk/spdk_pid92481 00:34:48.357 Removing: /var/run/dpdk/spdk_pid92604 00:34:48.357 Removing: /var/run/dpdk/spdk_pid92639 00:34:48.357 Removing: /var/run/dpdk/spdk_pid92693 00:34:48.357 Removing: /var/run/dpdk/spdk_pid92751 00:34:48.357 Removing: /var/run/dpdk/spdk_pid92799 00:34:48.357 Removing: /var/run/dpdk/spdk_pid93017 00:34:48.357 Removing: /var/run/dpdk/spdk_pid93087 00:34:48.357 Removing: /var/run/dpdk/spdk_pid93154 00:34:48.357 Removing: /var/run/dpdk/spdk_pid93210 00:34:48.357 Removing: /var/run/dpdk/spdk_pid93243 00:34:48.357 Removing: /var/run/dpdk/spdk_pid93300 00:34:48.357 Removing: /var/run/dpdk/spdk_pid93441 00:34:48.357 Removing: /var/run/dpdk/spdk_pid93646 00:34:48.357 Removing: /var/run/dpdk/spdk_pid94243 00:34:48.357 Removing: /var/run/dpdk/spdk_pid94962 00:34:48.357 Removing: /var/run/dpdk/spdk_pid95569 00:34:48.357 Removing: /var/run/dpdk/spdk_pid96376 00:34:48.357 Clean 00:34:48.357 11:30:17 -- common/autotest_common.sh@1451 -- # return 0 00:34:48.357 11:30:17 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:34:48.357 11:30:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:48.357 11:30:17 -- common/autotest_common.sh@10 -- # set +x 00:34:48.357 11:30:17 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:34:48.357 11:30:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:48.357 11:30:17 -- common/autotest_common.sh@10 -- # set +x 00:34:48.357 11:30:17 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:48.357 11:30:17 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:48.357 11:30:17 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:48.357 11:30:17 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:34:48.357 11:30:17 -- spdk/autotest.sh@394 -- # hostname 00:34:48.357 11:30:17 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:48.618 geninfo: WARNING: invalid characters removed from testname! 00:35:15.208 11:30:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:16.590 11:30:45 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:19.135 11:30:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:21.050 11:30:49 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:22.959 11:30:51 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:25.501 11:30:54 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:27.437 11:30:56 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:27.437 11:30:56 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:35:27.437 11:30:56 -- common/autotest_common.sh@1681 -- $ lcov --version 00:35:27.437 11:30:56 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:35:27.437 11:30:56 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:35:27.437 11:30:56 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:35:27.437 11:30:56 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:35:27.437 11:30:56 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:35:27.437 11:30:56 -- scripts/common.sh@336 -- $ IFS=.-: 00:35:27.437 11:30:56 -- scripts/common.sh@336 -- $ read -ra ver1 00:35:27.437 11:30:56 -- scripts/common.sh@337 -- $ IFS=.-: 
00:35:27.437 11:30:56 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:35:27.437 11:30:56 -- common/autotest_common.sh@1681 -- $ lcov --version
00:35:27.437 11:30:56 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:35:27.437 11:30:56 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:35:27.437 11:30:56 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:35:27.437 11:30:56 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:35:27.437 11:30:56 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:35:27.437 11:30:56 -- scripts/common.sh@336 -- $ IFS=.-:
00:35:27.437 11:30:56 -- scripts/common.sh@336 -- $ read -ra ver1
00:35:27.437 11:30:56 -- scripts/common.sh@337 -- $ IFS=.-:
00:35:27.437 11:30:56 -- scripts/common.sh@337 -- $ read -ra ver2
00:35:27.437 11:30:56 -- scripts/common.sh@338 -- $ local 'op=<'
00:35:27.437 11:30:56 -- scripts/common.sh@340 -- $ ver1_l=2
00:35:27.437 11:30:56 -- scripts/common.sh@341 -- $ ver2_l=1
00:35:27.437 11:30:56 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:35:27.437 11:30:56 -- scripts/common.sh@344 -- $ case "$op" in
00:35:27.437 11:30:56 -- scripts/common.sh@345 -- $ : 1
00:35:27.437 11:30:56 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:35:27.437 11:30:56 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:27.437 11:30:56 -- scripts/common.sh@365 -- $ decimal 1
00:35:27.437 11:30:56 -- scripts/common.sh@353 -- $ local d=1
00:35:27.437 11:30:56 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:35:27.437 11:30:56 -- scripts/common.sh@355 -- $ echo 1
00:35:27.437 11:30:56 -- scripts/common.sh@365 -- $ ver1[v]=1
00:35:27.708 11:30:56 -- scripts/common.sh@366 -- $ decimal 2
00:35:27.708 11:30:56 -- scripts/common.sh@353 -- $ local d=2
00:35:27.708 11:30:56 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:35:27.708 11:30:56 -- scripts/common.sh@355 -- $ echo 2
00:35:27.708 11:30:56 -- scripts/common.sh@366 -- $ ver2[v]=2
00:35:27.708 11:30:56 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:35:27.708 11:30:56 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:35:27.708 11:30:56 -- scripts/common.sh@368 -- $ return 0
00:35:27.708 11:30:56 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:27.708 11:30:56 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:35:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:27.708 --rc genhtml_branch_coverage=1
00:35:27.708 --rc genhtml_function_coverage=1
00:35:27.708 --rc genhtml_legend=1
00:35:27.708 --rc geninfo_all_blocks=1
00:35:27.708 --rc geninfo_unexecuted_blocks=1
00:35:27.708
00:35:27.708 '
00:35:27.708 11:30:56 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:35:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:27.708 --rc genhtml_branch_coverage=1
00:35:27.708 --rc genhtml_function_coverage=1
00:35:27.708 --rc genhtml_legend=1
00:35:27.708 --rc geninfo_all_blocks=1
00:35:27.708 --rc geninfo_unexecuted_blocks=1
00:35:27.708
00:35:27.708 '
00:35:27.708 11:30:56 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:35:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:27.708 --rc genhtml_branch_coverage=1
00:35:27.708 --rc genhtml_function_coverage=1
00:35:27.708 --rc genhtml_legend=1
00:35:27.708 --rc geninfo_all_blocks=1
00:35:27.708 --rc geninfo_unexecuted_blocks=1
00:35:27.708
00:35:27.708 '
00:35:27.708 11:30:56 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:35:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:27.708 --rc genhtml_branch_coverage=1
00:35:27.708 --rc genhtml_function_coverage=1
00:35:27.708 --rc genhtml_legend=1
00:35:27.708 --rc geninfo_all_blocks=1
00:35:27.708 --rc geninfo_unexecuted_blocks=1
00:35:27.708
00:35:27.708 '
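The scripts/common.sh trace above is a field-by-field version comparison (cmp_versions) deciding whether the installed lcov predates version 2; when it does, the legacy branch/function --rc options are kept in LCOV_OPTS and LCOV. A hedged re-sketch of that logic, not a verbatim copy of scripts/common.sh:

lt() {  # usage: lt 1.15 2 -> status 0 when the first version sorts before the second
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}

# mirrors the decision traced above: lcov 1.15 is older than 2, so the legacy flags stay
if lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi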
00:35:27.708 11:30:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:35:27.708 11:30:56 -- scripts/common.sh@15 -- $ shopt -s extglob
00:35:27.708 11:30:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:35:27.708 11:30:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:27.708 11:30:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:27.708 11:30:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:27.708 11:30:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:27.708 11:30:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:27.708 11:30:56 -- paths/export.sh@5 -- $ export PATH
00:35:27.708 11:30:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:27.708 11:30:56 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:35:27.708 11:30:56 -- common/autobuild_common.sh@479 -- $ date +%s
00:35:27.708 11:30:56 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732707056.XXXXXX
00:35:27.709 11:30:56 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732707056.Ona8Ge
00:35:27.709 11:30:56 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:35:27.709 11:30:56 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:35:27.709 11:30:56 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:35:27.709 11:30:56 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:35:27.709 11:30:56 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:35:27.709 11:30:56 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:35:27.709 11:30:56 -- common/autobuild_common.sh@495 -- $ get_config_params
00:35:27.709 11:30:56 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:35:27.709 11:30:56 -- common/autotest_common.sh@10 -- $ set +x
00:35:27.709 11:30:56 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:35:27.709 11:30:56 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
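Before packaging, autobuild_common.sh above provisions a throwaway workspace with mktemp and assembles the scan-build exclude list and configure parameters as plain strings; the pm/common trace that follows then starts the resource monitors. A condensed illustration of those assignments, with values copied from the trace (config_params abbreviated to a subset of the flags shown above):

out=/home/vagrant/spdk_repo/spdk/../output
SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
scanbuild="scan-build -o $out/scan-build-tmp$scanbuild_exclude --status-bugs"
config_params='--enable-debug --enable-werror --enable-coverage --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'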
00:35:27.709 11:30:56 -- pm/common@17 -- $ local monitor
00:35:27.709 11:30:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:27.709 11:30:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:27.709 11:30:56 -- pm/common@25 -- $ sleep 1
00:35:27.709 11:30:56 -- pm/common@21 -- $ date +%s
00:35:27.709 11:30:56 -- pm/common@21 -- $ date +%s
00:35:27.709 11:30:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732707056
00:35:27.709 11:30:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732707056
00:35:27.709 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732707056_collect-cpu-load.pm.log
00:35:27.709 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732707056_collect-vmstat.pm.log
00:35:28.643 11:30:57 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:35:28.643 11:30:57 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:35:28.643 11:30:57 -- spdk/autopackage.sh@14 -- $ timing_finish
00:35:28.643 11:30:57 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:28.643 11:30:57 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:28.643 11:30:57 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:28.643 11:30:57 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:35:28.643 11:30:57 -- pm/common@29 -- $ signal_monitor_resources TERM
00:35:28.643 11:30:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:35:28.643 11:30:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:28.643 11:30:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:35:28.643 11:30:57 -- pm/common@44 -- $ pid=98120
00:35:28.643 11:30:57 -- pm/common@50 -- $ kill -TERM 98120
00:35:28.643 11:30:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:28.643 11:30:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:35:28.643 11:30:57 -- pm/common@44 -- $ pid=98121
00:35:28.643 11:30:57 -- pm/common@50 -- $ kill -TERM 98121
00:35:28.651 + [[ -n 5767 ]]
00:35:28.651 + sudo kill 5767
00:35:28.651 [Pipeline] }
00:35:28.666 [Pipeline] // timeout
00:35:28.672 [Pipeline] }
00:35:28.687 [Pipeline] // stage
00:35:28.693 [Pipeline] }
00:35:28.708 [Pipeline] // catchError
00:35:28.719 [Pipeline] stage
00:35:28.721 [Pipeline] { (Stop VM)
00:35:28.737 [Pipeline] sh
00:35:29.022 + vagrant halt
00:35:31.559 ==> default: Halting domain...
00:35:36.864 [Pipeline] sh
00:35:37.149 + vagrant destroy -f
00:35:39.692 ==> default: Removing domain...
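The stop_monitor_resources call traced just before the VM teardown above stops each resource collector by reading its pid file from the power output directory and sending SIGTERM. A paraphrased sketch of that pattern; pm/common itself is not reproduced here, and the monitor list is inferred from the pid files named in the log:

power_dir=/home/vagrant/spdk_repo/spdk/../output/power
signal_monitor_resources() {
    local signal=${1:-TERM} monitor pid
    for monitor in collect-cpu-load collect-vmstat; do
        [[ -e $power_dir/$monitor.pid ]] || continue   # collector may not have started
        pid=$(<"$power_dir/$monitor.pid")
        kill -"$signal" "$pid"
    done
}
signal_monitor_resources TERM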
00:35:40.641 [Pipeline] sh
00:35:40.923 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:35:40.933 [Pipeline] }
00:35:40.948 [Pipeline] // stage
00:35:40.954 [Pipeline] }
00:35:40.970 [Pipeline] // dir
00:35:40.977 [Pipeline] }
00:35:40.994 [Pipeline] // wrap
00:35:41.000 [Pipeline] }
00:35:41.014 [Pipeline] // catchError
00:35:41.025 [Pipeline] stage
00:35:41.027 [Pipeline] { (Epilogue)
00:35:41.041 [Pipeline] sh
00:35:41.327 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:46.652 [Pipeline] catchError
00:35:46.654 [Pipeline] {
00:35:46.669 [Pipeline] sh
00:35:46.955 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:46.955 Artifacts sizes are good
00:35:46.967 [Pipeline] }
00:35:46.985 [Pipeline] // catchError
00:35:46.999 [Pipeline] archiveArtifacts
00:35:47.008 Archiving artifacts
00:35:47.120 [Pipeline] cleanWs
00:35:47.133 [WS-CLEANUP] Deleting project workspace...
00:35:47.133 [WS-CLEANUP] Deferred wipeout is used...
00:35:47.141 [WS-CLEANUP] done
00:35:47.143 [Pipeline] }
00:35:47.159 [Pipeline] // stage
00:35:47.164 [Pipeline] }
00:35:47.178 [Pipeline] // node
00:35:47.184 [Pipeline] End of Pipeline
00:35:47.227 Finished: SUCCESS