00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 139
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3640
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.100 The recommended git tool is: git
00:00:00.100 using credential 00000000-0000-0000-0000-000000000002
00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.155 Fetching changes from the remote Git repository
00:00:00.157 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.213 Using shallow fetch with depth 1
00:00:00.213 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.213 > git --version # timeout=10
00:00:00.257 > git --version # 'git version 2.39.2'
00:00:00.257 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.287 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.287 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.134 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.146 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.159 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:07.159 > git config core.sparsecheckout # timeout=10
00:00:07.172 > git read-tree -mu HEAD # timeout=10
00:00:07.189 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:07.208 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:07.209 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:07.321 [Pipeline] Start of Pipeline
00:00:07.336 [Pipeline] library
00:00:07.338 Loading library shm_lib@master
00:00:07.338 Library shm_lib@master is cached. Copying from home.
00:00:07.354 [Pipeline] node
00:00:07.366 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.367 [Pipeline] {
00:00:07.378 [Pipeline] catchError
00:00:07.379 [Pipeline] {
00:00:07.393 [Pipeline] wrap
00:00:07.402 [Pipeline] {
00:00:07.411 [Pipeline] stage
00:00:07.412 [Pipeline] { (Prologue)
00:00:07.428 [Pipeline] echo
00:00:07.430 Node: VM-host-SM38
00:00:07.437 [Pipeline] cleanWs
00:00:07.447 [WS-CLEANUP] Deleting project workspace...
00:00:07.447 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.455 [WS-CLEANUP] done
00:00:07.714 [Pipeline] setCustomBuildProperty
00:00:07.805 [Pipeline] httpRequest
00:00:10.823 [Pipeline] echo
00:00:10.825 Sorcerer 10.211.164.101 is dead
00:00:10.834 [Pipeline] httpRequest
00:00:12.475 [Pipeline] echo
00:00:12.477 Sorcerer 10.211.164.101 is alive
00:00:12.487 [Pipeline] retry
00:00:12.489 [Pipeline] {
00:00:12.501 [Pipeline] httpRequest
00:00:12.507 HttpMethod: GET
00:00:12.507 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:12.508 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:12.535 Response Code: HTTP/1.1 200 OK
00:00:12.536 Success: Status code 200 is in the accepted range: 200,404
00:00:12.537 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:19.032 [Pipeline] }
00:00:19.050 [Pipeline] // retry
00:00:19.057 [Pipeline] sh
00:00:19.346 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:19.364 [Pipeline] httpRequest
00:00:19.860 [Pipeline] echo
00:00:19.862 Sorcerer 10.211.164.101 is alive
00:00:19.872 [Pipeline] retry
00:00:19.874 [Pipeline] {
00:00:19.886 [Pipeline] httpRequest
00:00:19.892 HttpMethod: GET
00:00:19.892 URL: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:19.893 Sending request to url: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:19.913 Response Code: HTTP/1.1 200 OK
00:00:19.914 Success: Status code 200 is in the accepted range: 200,404
00:00:19.914 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:01:31.955 [Pipeline] }
00:01:31.973 [Pipeline] // retry
00:01:31.980 [Pipeline] sh
00:01:32.268 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:01:34.847 [Pipeline] sh
00:01:35.124 + git -C spdk log --oneline -n5
00:01:35.124 b18e1bd62 version: v24.09.1-pre
00:01:35.124 19524ad45 version: v24.09
00:01:35.124 9756b40a3 dpdk: update submodule to include alarm_cancel fix
00:01:35.124 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810
00:01:35.124 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys
00:01:35.146 [Pipeline] withCredentials
00:01:35.158 > git --version # timeout=10
00:01:35.170 > git --version # 'git version 2.39.2'
00:01:35.190 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:35.192 [Pipeline] {
00:01:35.202 [Pipeline] retry
00:01:35.204 [Pipeline] {
00:01:35.220 [Pipeline] sh
00:01:35.504 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:35.518 [Pipeline] }
00:01:35.535 [Pipeline] // retry
00:01:35.540 [Pipeline] }
00:01:35.556 [Pipeline] // withCredentials
00:01:35.564 [Pipeline] httpRequest
00:01:36.127 [Pipeline] echo
00:01:36.129 Sorcerer 10.211.164.101 is alive
00:01:36.139 [Pipeline] retry
00:01:36.141 [Pipeline] {
00:01:36.155 [Pipeline] httpRequest
00:01:36.160 HttpMethod: GET
00:01:36.161 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:36.161 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:36.163 Response Code: HTTP/1.1 200 OK
00:01:36.164 Success: Status code 200 is in the accepted range: 200,404
00:01:36.164 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:42.273 [Pipeline] }
00:01:42.290 [Pipeline] // retry
00:01:42.298 [Pipeline] sh
00:01:42.582 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:44.513 [Pipeline] sh
00:01:44.798 + git -C dpdk log --oneline -n5
00:01:44.798 eeb0605f11 version: 23.11.0
00:01:44.798 238778122a doc: update release notes for 23.11
00:01:44.798 46aa6b3cfc doc: fix description of RSS features
00:01:44.798 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:44.798 7e421ae345 devtools: support skipping forbid rule check
00:01:44.817 [Pipeline] writeFile
00:01:44.833 [Pipeline] sh
00:01:45.120 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:45.134 [Pipeline] sh
00:01:45.419 + cat autorun-spdk.conf
00:01:45.419 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:45.419 SPDK_TEST_NVME=1
00:01:45.419 SPDK_TEST_FTL=1
00:01:45.419 SPDK_TEST_ISAL=1
00:01:45.419 SPDK_RUN_ASAN=1
00:01:45.419 SPDK_RUN_UBSAN=1
00:01:45.419 SPDK_TEST_XNVME=1
00:01:45.419 SPDK_TEST_NVME_FDP=1
00:01:45.419 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:45.419 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:45.419 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:45.429 RUN_NIGHTLY=1
00:01:45.430 [Pipeline] }
00:01:45.444 [Pipeline] // stage
00:01:45.496 [Pipeline] stage
00:01:45.498 [Pipeline] { (Run VM)
00:01:45.511 [Pipeline] sh
00:01:45.823 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:45.823 + echo 'Start stage prepare_nvme.sh'
00:01:45.823 Start stage prepare_nvme.sh
00:01:45.823 + [[ -n 8 ]]
00:01:45.823 + disk_prefix=ex8
00:01:45.823 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:45.823 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:45.823 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:45.823 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:45.823 ++ SPDK_TEST_NVME=1
00:01:45.823 ++ SPDK_TEST_FTL=1
00:01:45.823 ++ SPDK_TEST_ISAL=1
00:01:45.823 ++ SPDK_RUN_ASAN=1
00:01:45.823 ++ SPDK_RUN_UBSAN=1
00:01:45.823 ++ SPDK_TEST_XNVME=1
00:01:45.823 ++ SPDK_TEST_NVME_FDP=1
00:01:45.823 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:45.823 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:45.823 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:45.823 ++ RUN_NIGHTLY=1
00:01:45.823 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:45.823 + nvme_files=()
00:01:45.823 + declare -A nvme_files
00:01:45.823 + backend_dir=/var/lib/libvirt/images/backends
00:01:45.823 + nvme_files['nvme.img']=5G
00:01:45.823 + nvme_files['nvme-cmb.img']=5G
00:01:45.823 + nvme_files['nvme-multi0.img']=4G
00:01:45.823 + nvme_files['nvme-multi1.img']=4G
00:01:45.823 + nvme_files['nvme-multi2.img']=4G
00:01:45.823 + nvme_files['nvme-openstack.img']=8G
00:01:45.823 + nvme_files['nvme-zns.img']=5G
00:01:45.823 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:45.823 + (( SPDK_TEST_FTL == 1 ))
00:01:45.823 + nvme_files["nvme-ftl.img"]=6G
00:01:45.823 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:45.823 + nvme_files["nvme-fdp.img"]=1G
00:01:45.823 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:45.823 + for nvme in "${!nvme_files[@]}"
00:01:45.824 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G
00:01:45.824 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:45.824 + for nvme in "${!nvme_files[@]}"
00:01:45.824 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G
00:01:46.430 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:46.430 + for nvme in "${!nvme_files[@]}"
00:01:46.430 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G
00:01:46.693 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:46.693 + for nvme in "${!nvme_files[@]}"
00:01:46.693 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G
00:01:46.693 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:46.693 + for nvme in "${!nvme_files[@]}"
00:01:46.693 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G
00:01:47.266 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:47.266 + for nvme in "${!nvme_files[@]}"
00:01:47.266 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G
00:01:47.266 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:47.266 + for nvme in "${!nvme_files[@]}"
00:01:47.266 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G
00:01:47.266 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:47.266 + for nvme in "${!nvme_files[@]}"
00:01:47.266 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G
00:01:47.528 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:47.528 + for nvme in "${!nvme_files[@]}"
00:01:47.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G
00:01:48.100 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:48.100 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu
00:01:48.100 + echo 'End stage prepare_nvme.sh'
00:01:48.100 End stage prepare_nvme.sh
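For readability, the prepare_nvme.sh stage traced above boils down to the loop below (a sketch reconstructed only from the xtrace output; the sizes, the backend directory, the create_nvme_img.sh helper, and disk_prefix=ex8 are exactly those shown in the trace):

    #!/usr/bin/env bash
    # Reconstruction of the traced loop: one raw backing file per NVMe drive,
    # with FTL/FDP images added only when the matching test flag is set.
    backend_dir=/var/lib/libvirt/images/backends
    disk_prefix=ex8
    declare -A nvme_files=(
        [nvme.img]=5G        [nvme-cmb.img]=5G     [nvme-zns.img]=5G
        [nvme-multi0.img]=4G [nvme-multi1.img]=4G  [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G
    )
    (( SPDK_TEST_FTL == 1 ))      && nvme_files[nvme-ftl.img]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G
    for nvme in "${!nvme_files[@]}"; do
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/$disk_prefix-$nvme" -s "${nvme_files[$nvme]}"
    done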
00:01:48.113 [Pipeline] sh
00:01:48.398 + DISTRO=fedora39
00:01:48.398 + CPUS=10
00:01:48.398 + RAM=12288
00:01:48.398 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:48.398 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:48.398 
00:01:48.398 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:48.398 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:48.398 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:48.398 HELP=0
00:01:48.398 DRY_RUN=0
00:01:48.398 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,
00:01:48.398 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:48.398 NVME_AUTO_CREATE=0
00:01:48.398 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,,
00:01:48.398 NVME_CMB=,,,,
00:01:48.398 NVME_PMR=,,,,
00:01:48.398 NVME_ZNS=,,,,
00:01:48.398 NVME_MS=true,,,,
00:01:48.398 NVME_FDP=,,,on,
00:01:48.398 SPDK_VAGRANT_DISTRO=fedora39
00:01:48.398 SPDK_VAGRANT_VMCPU=10
00:01:48.398 SPDK_VAGRANT_VMRAM=12288
00:01:48.398 SPDK_VAGRANT_PROVIDER=libvirt
00:01:48.398 SPDK_VAGRANT_HTTP_PROXY=
00:01:48.398 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:48.398 SPDK_OPENSTACK_NETWORK=0
00:01:48.398 VAGRANT_PACKAGE_BOX=0
00:01:48.398 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:48.398 FORCE_DISTRO=true
00:01:48.398 VAGRANT_BOX_VERSION=
00:01:48.398 EXTRA_VAGRANTFILES=
00:01:48.398 NIC_MODEL=e1000
00:01:48.398 
00:01:48.398 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:48.398 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:50.943 Bringing machine 'default' up with 'libvirt' provider...
00:01:51.204 ==> default: Creating image (snapshot of base box volume).
00:01:51.204 ==> default: Creating domain with the following settings...
00:01:51.204 ==> default:  -- Name: fedora39-39-1.5-1721788873-2326_default_1731898914_194bf4e8af8315928784
00:01:51.204 ==> default:  -- Domain type: kvm
00:01:51.204 ==> default:  -- Cpus: 10
00:01:51.204 ==> default:  -- Feature: acpi
00:01:51.204 ==> default:  -- Feature: apic
00:01:51.204 ==> default:  -- Feature: pae
00:01:51.204 ==> default:  -- Memory: 12288M
00:01:51.204 ==> default:  -- Memory Backing: hugepages:
00:01:51.204 ==> default:  -- Management MAC:
00:01:51.204 ==> default:  -- Loader:
00:01:51.204 ==> default:  -- Nvram:
00:01:51.204 ==> default:  -- Base box: spdk/fedora39
00:01:51.204 ==> default:  -- Storage pool: default
00:01:51.204 ==> default:  -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731898914_194bf4e8af8315928784.img (20G)
00:01:51.204 ==> default:  -- Volume Cache: default
00:01:51.204 ==> default:  -- Kernel:
00:01:51.204 ==> default:  -- Initrd:
00:01:51.204 ==> default:  -- Graphics Type: vnc
00:01:51.204 ==> default:  -- Graphics Port: -1
00:01:51.204 ==> default:  -- Graphics IP: 127.0.0.1
00:01:51.204 ==> default:  -- Graphics Password: Not defined
00:01:51.204 ==> default:  -- Video Type: cirrus
00:01:51.204 ==> default:  -- Video VRAM: 9216
00:01:51.204 ==> default:  -- Sound Type:
00:01:51.204 ==> default:  -- Keymap: en-us
00:01:51.204 ==> default:  -- TPM Path:
00:01:51.204 ==> default:  -- INPUT: type=mouse, bus=ps2
00:01:51.204 ==> default:  -- Command line args:
00:01:51.204 ==> default:  -> value=-device,
00:01:51.204 ==> default:  -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:51.204 ==> default:  -> value=-drive,
00:01:51.204 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:51.204 ==> default:  -> value=-device,
00:01:51.204 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:51.204 ==> default:  -> value=-device,
00:01:51.204 ==> default:  -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:51.204 ==> default:  -> value=-drive,
00:01:51.205 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0,
00:01:51.205 ==> default:  -> value=-device,
00:01:51.205 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:51.205 ==> default:  -> value=-device,
00:01:51.205 ==> default:  -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:51.205 ==> default:  -> value=-drive,
00:01:51.205 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:51.205 ==> default:  -> value=-device,
00:01:51.205 ==> default:  -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:51.205 ==> default:  -> value=-drive,
00:01:51.205 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:51.205 ==> default:  -> value=-device,
00:01:51.205 ==> default:  -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:51.205 ==> default:  -> value=-drive,
00:01:51.205 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:51.205 ==> default:  -> value=-device,
00:01:51.205 ==> default:  -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:51.205 ==> default:  -> value=-device,
00:01:51.205 ==> default:  -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:51.205 ==> default:  -> value=-device,
00:01:51.205 ==> default:  -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:51.205 ==> default:  -> value=-drive,
00:01:51.205 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:51.205 ==> default:  -> value=-device,
00:01:51.205 ==> default:  -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
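Stripped of the libvirt plumbing, the fourth controller above (the FDP-enabled nvme-3) corresponds to the qemu-system flags below, copied verbatim from the arg dump; the machine, memory, and network options are omitted, so this is a sketch rather than the full command vagrant-libvirt assembles:

    qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096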
00:01:51.205 ==> default: Creating shared folders metadata...
00:01:51.205 ==> default: Starting domain.
00:01:52.146 ==> default: Waiting for domain to get an IP address...
00:02:07.031 ==> default: Waiting for SSH to become available...
00:02:07.031 ==> default: Configuring and enabling network interfaces...
00:02:11.239     default: SSH address: 192.168.121.43:22
00:02:11.239     default: SSH username: vagrant
00:02:11.239     default: SSH auth method: private key
00:02:13.153 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:19.760 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:25.024 ==> default: Mounting SSHFS shared folder...
00:02:26.399 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:26.399 ==> default: Checking Mount..
00:02:27.773 ==> default: Folder Successfully Mounted!
00:02:27.773 
00:02:27.773 SUCCESS!
00:02:27.773 
00:02:27.773 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:27.773 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:27.773 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:27.773 
00:02:27.782 [Pipeline] }
00:02:27.798 [Pipeline] // stage
00:02:27.807 [Pipeline] dir
00:02:27.808 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:27.810 [Pipeline] {
00:02:27.823 [Pipeline] catchError
00:02:27.825 [Pipeline] {
00:02:27.838 [Pipeline] sh
00:02:28.118 + vagrant ssh-config --host vagrant
00:02:28.118 + sed -ne '/^Host/,$p'
00:02:28.118 + tee ssh_conf
00:02:30.129 Host vagrant
00:02:30.129   HostName 192.168.121.43
00:02:30.129   User vagrant
00:02:30.129   Port 22
00:02:30.129   UserKnownHostsFile /dev/null
00:02:30.129   StrictHostKeyChecking no
00:02:30.129   PasswordAuthentication no
00:02:30.129   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:30.129   IdentitiesOnly yes
00:02:30.129   LogLevel FATAL
00:02:30.129   ForwardAgent yes
00:02:30.129   ForwardX11 yes
00:02:30.129 
00:02:30.141 [Pipeline] withEnv
00:02:30.143 [Pipeline] {
00:02:30.156 [Pipeline] sh
00:02:30.434 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:30.434 source /etc/os-release
00:02:30.434 [[ -e /image.version ]] && img=$(< /image.version)
00:02:30.434 # Minimal, systemd-like check.
00:02:30.434 if [[ -e /.dockerenv ]]; then
00:02:30.434 # Clear garbage from the node'\''s name:
00:02:30.434 # agt-er_autotest_547-896 -> autotest_547-896
00:02:30.434 # $HOSTNAME is the actual container id
00:02:30.434 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:30.434 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:30.434 # We can assume this is a mount from a host where container is running,
00:02:30.434 # so fetch its hostname to easily identify the target swarm worker.
00:02:30.434 container="$(< /etc/hostname) ($agent)"
00:02:30.434 else
00:02:30.434 # Fallback
00:02:30.434 container=$agent
00:02:30.434 fi
00:02:30.434 fi
00:02:30.434 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:30.434 '
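The agent-name cleanup in the remote script above relies on bash shortest-prefix removal: ${var#*_} deletes everything up to and including the first underscore. A quick standalone check, using the example value from the script's own comment:

    DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME=agt-er_autotest_547-896
    echo "${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}"   # prints: autotest_547-896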
00:02:30.445 [Pipeline] }
00:02:30.460 [Pipeline] // withEnv
00:02:30.469 [Pipeline] setCustomBuildProperty
00:02:30.483 [Pipeline] stage
00:02:30.486 [Pipeline] { (Tests)
00:02:30.504 [Pipeline] sh
00:02:30.781 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:31.052 [Pipeline] sh
00:02:31.331 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:31.602 [Pipeline] timeout
00:02:31.602 Timeout set to expire in 50 min
00:02:31.605 [Pipeline] {
00:02:31.619 [Pipeline] sh
00:02:31.896 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:32.461 HEAD is now at b18e1bd62 version: v24.09.1-pre
00:02:32.473 [Pipeline] sh
00:02:32.751 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:33.021 [Pipeline] sh
00:02:33.302 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:33.577 [Pipeline] sh
00:02:33.855 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:33.855 ++ readlink -f spdk_repo
00:02:33.855 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:33.855 + [[ -n /home/vagrant/spdk_repo ]]
00:02:33.855 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:33.855 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:33.855 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:33.855 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:33.855 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:33.855 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:33.855 + cd /home/vagrant/spdk_repo
00:02:33.855 + source /etc/os-release
00:02:33.855 ++ NAME='Fedora Linux'
00:02:33.855 ++ VERSION='39 (Cloud Edition)'
00:02:33.855 ++ ID=fedora
00:02:33.855 ++ VERSION_ID=39
00:02:33.855 ++ VERSION_CODENAME=
00:02:33.855 ++ PLATFORM_ID=platform:f39
00:02:33.855 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:33.855 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:33.855 ++ LOGO=fedora-logo-icon
00:02:33.855 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:33.855 ++ HOME_URL=https://fedoraproject.org/
00:02:33.855 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:33.855 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:33.855 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:33.855 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:33.855 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:33.855 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:33.855 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:33.855 ++ SUPPORT_END=2024-11-12
00:02:33.855 ++ VARIANT='Cloud Edition'
00:02:33.855 ++ VARIANT_ID=cloud
00:02:33.855 + uname -a
00:02:33.855 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:33.855 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:34.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:34.420 Hugepages
00:02:34.420 node     hugesize     free /  total
00:02:34.420 node0   1048576kB        0 /      0
00:02:34.420 node0      2048kB        0 /      0
00:02:34.420 
00:02:34.420 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:02:34.420 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:02:34.420 NVMe     0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:02:34.678 NVMe     0000:00:11.0    1b36   0010   unknown nvme             nvme2      nvme2n1
00:02:34.678 NVMe     0000:00:12.0    1b36   0010   unknown nvme             nvme3      nvme3n1 nvme3n2 nvme3n3
00:02:34.678 NVMe     0000:00:13.0    1b36   0010   unknown nvme             nvme1      nvme1n1
00:02:34.678 + rm -f /tmp/spdk-ld-path
00:02:34.678 + source autorun-spdk.conf
00:02:34.678 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:34.678 ++ SPDK_TEST_NVME=1
00:02:34.678 ++ SPDK_TEST_FTL=1
00:02:34.678 ++ SPDK_TEST_ISAL=1
00:02:34.678 ++ SPDK_RUN_ASAN=1
00:02:34.678 ++ SPDK_RUN_UBSAN=1
00:02:34.678 ++ SPDK_TEST_XNVME=1
00:02:34.678 ++ SPDK_TEST_NVME_FDP=1
00:02:34.678 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:34.678 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:34.678 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:34.678 ++ RUN_NIGHTLY=1
00:02:34.678 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:34.678 + [[ -n '' ]]
00:02:34.678 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:34.678 + for M in /var/spdk/build-*-manifest.txt
00:02:34.678 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:34.678 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:34.678 + for M in /var/spdk/build-*-manifest.txt
00:02:34.678 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:34.678 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:34.678 + for M in /var/spdk/build-*-manifest.txt
00:02:34.678 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:34.678 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:34.678 ++ uname
00:02:34.678 + [[ Linux == \L\i\n\u\x ]]
00:02:34.678 + sudo dmesg -T
00:02:34.678 + sudo dmesg --clear
00:02:34.678 + dmesg_pid=5766
00:02:34.678 + [[ Fedora Linux == FreeBSD ]]
00:02:34.678 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:34.678 + sudo dmesg -Tw
00:02:34.678 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:34.678 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:34.678 + [[ -x /usr/src/fio-static/fio ]]
00:02:34.678 + export FIO_BIN=/usr/src/fio-static/fio
00:02:34.678 + FIO_BIN=/usr/src/fio-static/fio
00:02:34.678 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:34.678 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:34.678 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:34.678 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:34.678 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:34.679 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:34.679 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:34.679 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:34.679 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:34.679 Test configuration:
00:02:34.679 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:34.679 SPDK_TEST_NVME=1
00:02:34.679 SPDK_TEST_FTL=1
00:02:34.679 SPDK_TEST_ISAL=1
00:02:34.679 SPDK_RUN_ASAN=1
00:02:34.679 SPDK_RUN_UBSAN=1
00:02:34.679 SPDK_TEST_XNVME=1
00:02:34.679 SPDK_TEST_NVME_FDP=1
00:02:34.679 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:34.679 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:34.679 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:34.679 RUN_NIGHTLY=1 03:02:38 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:02:34.679 03:02:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:34.679 03:02:38 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:34.679 03:02:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:34.679 03:02:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:34.679 03:02:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:34.679 03:02:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:34.679 03:02:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:34.679 03:02:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:34.679 03:02:38 -- paths/export.sh@5 -- $ export PATH
00:02:34.679 03:02:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:34.679 03:02:38 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:34.679 03:02:38 -- common/autobuild_common.sh@479 -- $ date +%s
00:02:34.679 03:02:38 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731898958.XXXXXX
00:02:34.679 03:02:38 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731898958.BkV7Eq
00:02:34.679 03:02:38 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:02:34.679 03:02:38 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:02:34.679 03:02:38 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:34.679 03:02:38 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:34.679 03:02:38 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:34.679 03:02:38 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:34.679 03:02:38 -- common/autobuild_common.sh@495 -- $ get_config_params
00:02:34.679 03:02:38 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:34.679 03:02:38 -- common/autotest_common.sh@10 -- $ set +x
00:02:34.679 03:02:38 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:02:34.679 03:02:38 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:02:34.679 03:02:38 -- pm/common@17 -- $ local monitor
00:02:34.679 03:02:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:34.679 03:02:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:34.679 03:02:38 -- pm/common@25 -- $ sleep 1
00:02:34.679 03:02:38 -- pm/common@21 -- $ date +%s
00:02:34.679 03:02:38 -- pm/common@21 -- $ date +%s
00:02:34.679 03:02:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731898958
00:02:34.679 03:02:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731898958
00:02:34.679 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731898958_collect-vmstat.pm.log
00:02:34.679 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731898958_collect-cpu-load.pm.log
00:02:36.053 03:02:39 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:02:36.053 03:02:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:36.053 03:02:39 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:36.053 03:02:39 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:36.053 03:02:39 -- spdk/autobuild.sh@16 -- $ date -u
00:02:36.053 Mon Nov 18 03:02:39 AM UTC 2024
00:02:36.053 03:02:39 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:36.053 v24.09-rc1-9-gb18e1bd62
00:02:36.053 03:02:39 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:36.053 03:02:39 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:36.053 03:02:39 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:36.053 03:02:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:36.053 03:02:39 -- common/autotest_common.sh@10 -- $ set +x
00:02:36.053 ************************************
00:02:36.053 START TEST asan
00:02:36.053 ************************************
00:02:36.053 using asan
00:02:36.053 03:02:39 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:02:36.053 
00:02:36.053 real	0m0.000s
00:02:36.053 user	0m0.000s
00:02:36.053 sys	0m0.000s
00:02:36.053 03:02:39 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:36.053 ************************************
00:02:36.053 03:02:39 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:36.053 END TEST asan
00:02:36.053 ************************************
00:02:36.053 03:02:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:36.053 03:02:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:36.053 03:02:39 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:36.053 03:02:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:36.053 03:02:39 -- common/autotest_common.sh@10 -- $ set +x
00:02:36.053 ************************************
00:02:36.053 START TEST ubsan
00:02:36.053 ************************************
00:02:36.053 using ubsan
00:02:36.053 03:02:39 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:36.053 
00:02:36.053 real	0m0.000s
00:02:36.053 user	0m0.000s
00:02:36.053 sys	0m0.000s
00:02:36.053 03:02:39 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:36.053 ************************************
00:02:36.053 END TEST ubsan
00:02:36.053 ************************************
00:02:36.053 03:02:39 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:36.053 03:02:39 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:02:36.053 03:02:39 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:36.053 03:02:39 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:36.053 03:02:39 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:02:36.053 03:02:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:36.053 03:02:39 -- common/autotest_common.sh@10 -- $ set +x
00:02:36.053 ************************************
00:02:36.053 START TEST build_native_dpdk
00:02:36.053 ************************************
00:02:36.053 03:02:39 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:36.053 eeb0605f11 version: 23.11.0
00:02:36.053 238778122a doc: update release notes for 23.11
00:02:36.053 46aa6b3cfc doc: fix description of RSS features
00:02:36.053 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:36.053 7e421ae345 devtools: support skipping forbid rule check
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:02:36.053 patching file config/rte_config.h
00:02:36.053 Hunk #1 succeeded at 60 (offset 1 line).
00:02:36.053 03:02:39 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:36.053 03:02:39 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:02:36.054 03:02:39 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1
00:02:36.054 patching file lib/pcapng/rte_pcapng.c
00:02:36.054 03:02:39 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:36.054 03:02:39 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
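The lt/ge checks traced above both funnel into cmp_versions, which splits the two version strings on the characters ".-:" and compares them field by field as decimal numbers. That is why lt 23.11.0 21.11.0 returns 1 while lt 23.11.0 24.07.0 returns 0 and ge 23.11.0 24.07.0 returns 1, which selects the patch steps seen in the trace. A condensed sketch of the "<" case only (the real scripts/common.sh helper also handles the other operators and unequal field counts):

    lt() { # usage: lt VER1 VER2  ->  exit status 0 iff VER1 < VER2
        local IFS=.-: i ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((i = 0; i < ${#ver1[@]}; i++)); do
            (( 10#${ver1[i]} > 10#${ver2[i]} )) && return 1
            (( 10#${ver1[i]} < 10#${ver2[i]} )) && return 0
        done
        return 1 # equal versions are not "less than"
    }
    lt 23.11.0 21.11.0 || echo "not lower"   # prints: not lower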
00:02:36.054 03:02:39 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false
00:02:36.054 03:02:39 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s
00:02:36.054 03:02:39 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']'
00:02:36.054 03:02:39 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:02:36.054 03:02:39 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:40.242 The Meson build system
00:02:40.242 Version: 1.5.0
00:02:40.242 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:40.242 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:40.242 Build type: native build
00:02:40.242 Program cat found: YES (/usr/bin/cat)
00:02:40.242 Project name: DPDK
00:02:40.242 Project version: 23.11.0
00:02:40.242 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:40.242 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:40.242 Host machine cpu family: x86_64
00:02:40.242 Host machine cpu: x86_64
00:02:40.242 Message: ## Building in Developer Mode ##
00:02:40.242 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:40.242 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:40.242 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:40.242 Program python3 found: YES (/usr/bin/python3)
00:02:40.242 Program cat found: YES (/usr/bin/cat)
00:02:40.242 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:40.242 Compiler for C supports arguments -march=native: YES
00:02:40.242 Checking for size of "void *" : 8
00:02:40.242 Checking for size of "void *" : 8 (cached)
00:02:40.242 Library m found: YES
00:02:40.242 Library numa found: YES
00:02:40.242 Has header "numaif.h" : YES
00:02:40.242 Library fdt found: NO
00:02:40.242 Library execinfo found: NO
00:02:40.242 Has header "execinfo.h" : YES
00:02:40.242 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:40.242 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:40.242 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:40.242 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:40.242 Run-time dependency openssl found: YES 3.1.1
00:02:40.242 Run-time dependency libpcap found: YES 1.10.4
00:02:40.242 Has header "pcap.h" with dependency libpcap: YES
00:02:40.242 Compiler for C supports arguments -Wcast-qual: YES
00:02:40.242 Compiler for C supports arguments -Wdeprecated: YES
00:02:40.242 Compiler for C supports arguments -Wformat: YES
00:02:40.242 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:40.242 Compiler for C supports arguments -Wformat-security: NO
00:02:40.242 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:40.242 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:40.242 Compiler for C supports arguments -Wnested-externs: YES
00:02:40.242 Compiler for C supports arguments -Wold-style-definition: YES
00:02:40.242 Compiler for C supports arguments -Wpointer-arith: YES
00:02:40.242 Compiler for C supports arguments -Wsign-compare: YES
00:02:40.242 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:40.242 Compiler for C supports arguments -Wundef: YES
00:02:40.242 Compiler for C supports arguments -Wwrite-strings: YES
00:02:40.242 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:40.242 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:40.242 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:40.242 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:40.242 Program objdump found: YES (/usr/bin/objdump)
00:02:40.242 Compiler for C supports arguments -mavx512f: YES
00:02:40.242 Checking if "AVX512 checking" compiles: YES
00:02:40.242 Fetching value of define "__SSE4_2__" : 1
00:02:40.243 Fetching value of define "__AES__" : 1
00:02:40.243 Fetching value of define "__AVX__" : 1
00:02:40.243 Fetching value of define "__AVX2__" : 1
00:02:40.243 Fetching value of define "__AVX512BW__" : 1
00:02:40.243 Fetching value of define "__AVX512CD__" : 1
00:02:40.243 Fetching value of define "__AVX512DQ__" : 1
00:02:40.243 Fetching value of define "__AVX512F__" : 1
00:02:40.243 Fetching value of define "__AVX512VL__" : 1
00:02:40.243 Fetching value of define "__PCLMUL__" : 1
00:02:40.243 Fetching value of define "__RDRND__" : 1
00:02:40.243 Fetching value of define "__RDSEED__" : 1
00:02:40.243 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:40.243 Fetching value of define "__znver1__" : (undefined)
00:02:40.243 Fetching value of define "__znver2__" : (undefined)
00:02:40.243 Fetching value of define "__znver3__" : (undefined)
00:02:40.243 Fetching value of define "__znver4__" : (undefined)
00:02:40.243 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:40.243 Message: lib/log: Defining dependency "log"
00:02:40.243 Message: lib/kvargs: Defining dependency "kvargs"
00:02:40.243 Message: lib/telemetry: Defining dependency "telemetry"
00:02:40.243 Checking for function "getentropy" : NO
00:02:40.243 Message: lib/eal: Defining dependency "eal"
00:02:40.243 Message: lib/ring: Defining dependency "ring"
00:02:40.243 Message: lib/rcu: Defining dependency "rcu"
00:02:40.243 Message: lib/mempool: Defining dependency "mempool"
00:02:40.243 Message: lib/mbuf: Defining dependency "mbuf"
00:02:40.243 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:40.243 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:40.243 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:40.243 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:40.243 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:40.243 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:40.243 Compiler for C supports arguments -mpclmul: YES
00:02:40.243 Compiler for C supports arguments -maes: YES
00:02:40.243 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:40.243 Compiler for C supports arguments -mavx512bw: YES
00:02:40.243 Compiler for C supports arguments -mavx512dq: YES
00:02:40.243 Compiler for C supports arguments -mavx512vl: YES
00:02:40.243 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:40.243 Compiler for C supports arguments -mavx2: YES
00:02:40.243 Compiler for C supports arguments -mavx: YES
00:02:40.243 Message: lib/net: Defining dependency "net"
00:02:40.243 Message: lib/meter: Defining dependency "meter"
00:02:40.243 Message: lib/ethdev: Defining dependency "ethdev"
00:02:40.243 Message: lib/pci: Defining dependency "pci"
00:02:40.243 Message: lib/cmdline: Defining dependency "cmdline"
00:02:40.243 Message: lib/metrics: Defining dependency "metrics"
00:02:40.243 Message: lib/hash: Defining dependency "hash"
00:02:40.243 Message: lib/timer: Defining dependency "timer"
00:02:40.243 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:40.243 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:40.243 Fetching value of define "__AVX512CD__" : 1 (cached)
00:02:40.243 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:40.243 Message: lib/acl: Defining dependency "acl"
00:02:40.243 Message: lib/bbdev: Defining dependency "bbdev"
00:02:40.243 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:40.243 Run-time dependency libelf found: YES 0.191
00:02:40.243 Message: lib/bpf: Defining dependency "bpf"
00:02:40.243 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:40.243 Message: lib/compressdev: Defining dependency "compressdev"
00:02:40.243 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:40.243 Message: lib/distributor: Defining dependency "distributor"
00:02:40.243 Message: lib/dmadev: Defining dependency "dmadev"
00:02:40.243 Message: lib/efd: Defining dependency "efd"
00:02:40.243 Message: lib/eventdev: Defining dependency "eventdev"
00:02:40.243 Message: lib/dispatcher: Defining dependency "dispatcher"
00:02:40.243 Message: lib/gpudev: Defining dependency "gpudev"
00:02:40.243 Message: lib/gro: Defining dependency "gro"
00:02:40.243 Message: lib/gso: Defining dependency "gso"
00:02:40.243 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:40.243 Message: lib/jobstats: Defining dependency "jobstats"
00:02:40.243 Message: lib/latencystats: Defining dependency "latencystats"
00:02:40.243 Message: lib/lpm: Defining dependency "lpm"
00:02:40.243 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:40.243 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:40.243 Fetching value of define "__AVX512IFMA__" : 1
00:02:40.243 Message: lib/member: Defining dependency "member"
00:02:40.243 Message: lib/pcapng: Defining dependency "pcapng"
00:02:40.243 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:40.243 Message: lib/power: Defining dependency "power"
00:02:40.243 Message: lib/rawdev: Defining dependency "rawdev"
00:02:40.243 Message: lib/regexdev: Defining dependency "regexdev"
00:02:40.243 Message: lib/mldev: Defining dependency "mldev"
00:02:40.243 Message: lib/rib: Defining dependency "rib"
00:02:40.243 Message: lib/reorder: Defining dependency "reorder"
00:02:40.243 Message: lib/sched: Defining dependency "sched"
00:02:40.243 Message: lib/security: Defining dependency "security"
00:02:40.243 Message: lib/stack: Defining dependency "stack"
00:02:40.243 Has header "linux/userfaultfd.h" : YES
00:02:40.243 Has header "linux/vduse.h" : YES
00:02:40.243 Message: lib/vhost: Defining dependency "vhost"
00:02:40.243 Message: lib/ipsec: Defining dependency "ipsec"
00:02:40.243 Message: lib/pdcp: Defining dependency "pdcp"
00:02:40.243 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:40.243 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:40.243 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:40.243 Message: lib/fib: Defining dependency "fib"
00:02:40.243 Message: lib/port: Defining dependency "port"
00:02:40.243 Message: lib/pdump: Defining dependency "pdump"
00:02:40.243 Message: lib/table: Defining dependency "table"
00:02:40.243 Message: lib/pipeline: Defining dependency "pipeline"
00:02:40.243 Message: lib/graph: Defining dependency "graph"
00:02:40.243 Message: lib/node: Defining dependency "node"
00:02:40.243 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:40.243 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:40.243 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:40.243 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:41.180 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:41.180 Compiler for C supports arguments -Wno-unused-value: YES
00:02:41.180 Compiler for C supports arguments -Wno-format: YES
00:02:41.180 Compiler for C supports arguments -Wno-format-security: YES
00:02:41.180 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:41.180 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:41.180 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:41.180 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:41.180 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:41.180 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:41.180 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:41.180 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:41.180 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:41.180 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:41.180 Has header "sys/epoll.h" : YES
00:02:41.180 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:41.180 Configuring doxy-api-html.conf using configuration
00:02:41.180 Configuring doxy-api-man.conf using configuration
00:02:41.180 Program mandb found: YES (/usr/bin/mandb)
00:02:41.180 Program sphinx-build found: NO
00:02:41.180 Configuring rte_build_config.h using configuration
00:02:41.180 Message:
00:02:41.180 =================
00:02:41.180 Applications Enabled
00:02:41.180 =================
00:02:41.180 
00:02:41.180 apps:
00:02:41.180 	dumpcap, graph, pdump, proc-info, test-acl, test-bbdev,
test-cmdline, test-compress-perf, 00:02:41.180 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:41.180 test-pmd, test-regex, test-sad, test-security-perf, 00:02:41.180 00:02:41.180 Message: 00:02:41.180 ================= 00:02:41.180 Libraries Enabled 00:02:41.180 ================= 00:02:41.180 00:02:41.180 libs: 00:02:41.180 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:41.180 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:41.180 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:41.180 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:41.180 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:41.180 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:41.180 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:41.180 00:02:41.180 00:02:41.180 Message: 00:02:41.180 =============== 00:02:41.180 Drivers Enabled 00:02:41.180 =============== 00:02:41.180 00:02:41.180 common: 00:02:41.180 00:02:41.180 bus: 00:02:41.180 pci, vdev, 00:02:41.180 mempool: 00:02:41.180 ring, 00:02:41.180 dma: 00:02:41.180 00:02:41.180 net: 00:02:41.180 i40e, 00:02:41.180 raw: 00:02:41.180 00:02:41.180 crypto: 00:02:41.180 00:02:41.180 compress: 00:02:41.180 00:02:41.180 regex: 00:02:41.180 00:02:41.180 ml: 00:02:41.180 00:02:41.180 vdpa: 00:02:41.180 00:02:41.180 event: 00:02:41.180 00:02:41.180 baseband: 00:02:41.180 00:02:41.180 gpu: 00:02:41.180 00:02:41.180 00:02:41.180 Message: 00:02:41.180 ================= 00:02:41.180 Content Skipped 00:02:41.180 ================= 00:02:41.180 00:02:41.180 apps: 00:02:41.180 00:02:41.180 libs: 00:02:41.180 00:02:41.180 drivers: 00:02:41.180 common/cpt: not in enabled drivers build config 00:02:41.180 common/dpaax: not in enabled drivers build config 00:02:41.181 common/iavf: not in enabled drivers build config 00:02:41.181 common/idpf: not in enabled drivers build config 00:02:41.181 common/mvep: not in enabled drivers build config 00:02:41.181 common/octeontx: not in enabled drivers build config 00:02:41.181 bus/auxiliary: not in enabled drivers build config 00:02:41.181 bus/cdx: not in enabled drivers build config 00:02:41.181 bus/dpaa: not in enabled drivers build config 00:02:41.181 bus/fslmc: not in enabled drivers build config 00:02:41.181 bus/ifpga: not in enabled drivers build config 00:02:41.181 bus/platform: not in enabled drivers build config 00:02:41.181 bus/vmbus: not in enabled drivers build config 00:02:41.181 common/cnxk: not in enabled drivers build config 00:02:41.181 common/mlx5: not in enabled drivers build config 00:02:41.181 common/nfp: not in enabled drivers build config 00:02:41.181 common/qat: not in enabled drivers build config 00:02:41.181 common/sfc_efx: not in enabled drivers build config 00:02:41.181 mempool/bucket: not in enabled drivers build config 00:02:41.181 mempool/cnxk: not in enabled drivers build config 00:02:41.181 mempool/dpaa: not in enabled drivers build config 00:02:41.181 mempool/dpaa2: not in enabled drivers build config 00:02:41.181 mempool/octeontx: not in enabled drivers build config 00:02:41.181 mempool/stack: not in enabled drivers build config 00:02:41.181 dma/cnxk: not in enabled drivers build config 00:02:41.181 dma/dpaa: not in enabled drivers build config 00:02:41.181 dma/dpaa2: not in enabled drivers build config 00:02:41.181 dma/hisilicon: not in enabled drivers build config 00:02:41.181 dma/idxd: not in enabled drivers build 
config 00:02:41.181 dma/ioat: not in enabled drivers build config 00:02:41.181 dma/skeleton: not in enabled drivers build config 00:02:41.181 net/af_packet: not in enabled drivers build config 00:02:41.181 net/af_xdp: not in enabled drivers build config 00:02:41.181 net/ark: not in enabled drivers build config 00:02:41.181 net/atlantic: not in enabled drivers build config 00:02:41.181 net/avp: not in enabled drivers build config 00:02:41.181 net/axgbe: not in enabled drivers build config 00:02:41.181 net/bnx2x: not in enabled drivers build config 00:02:41.181 net/bnxt: not in enabled drivers build config 00:02:41.181 net/bonding: not in enabled drivers build config 00:02:41.181 net/cnxk: not in enabled drivers build config 00:02:41.181 net/cpfl: not in enabled drivers build config 00:02:41.181 net/cxgbe: not in enabled drivers build config 00:02:41.181 net/dpaa: not in enabled drivers build config 00:02:41.181 net/dpaa2: not in enabled drivers build config 00:02:41.181 net/e1000: not in enabled drivers build config 00:02:41.181 net/ena: not in enabled drivers build config 00:02:41.181 net/enetc: not in enabled drivers build config 00:02:41.181 net/enetfec: not in enabled drivers build config 00:02:41.181 net/enic: not in enabled drivers build config 00:02:41.181 net/failsafe: not in enabled drivers build config 00:02:41.181 net/fm10k: not in enabled drivers build config 00:02:41.181 net/gve: not in enabled drivers build config 00:02:41.181 net/hinic: not in enabled drivers build config 00:02:41.181 net/hns3: not in enabled drivers build config 00:02:41.181 net/iavf: not in enabled drivers build config 00:02:41.181 net/ice: not in enabled drivers build config 00:02:41.181 net/idpf: not in enabled drivers build config 00:02:41.181 net/igc: not in enabled drivers build config 00:02:41.181 net/ionic: not in enabled drivers build config 00:02:41.181 net/ipn3ke: not in enabled drivers build config 00:02:41.181 net/ixgbe: not in enabled drivers build config 00:02:41.181 net/mana: not in enabled drivers build config 00:02:41.181 net/memif: not in enabled drivers build config 00:02:41.181 net/mlx4: not in enabled drivers build config 00:02:41.181 net/mlx5: not in enabled drivers build config 00:02:41.181 net/mvneta: not in enabled drivers build config 00:02:41.181 net/mvpp2: not in enabled drivers build config 00:02:41.181 net/netvsc: not in enabled drivers build config 00:02:41.181 net/nfb: not in enabled drivers build config 00:02:41.181 net/nfp: not in enabled drivers build config 00:02:41.181 net/ngbe: not in enabled drivers build config 00:02:41.181 net/null: not in enabled drivers build config 00:02:41.181 net/octeontx: not in enabled drivers build config 00:02:41.181 net/octeon_ep: not in enabled drivers build config 00:02:41.181 net/pcap: not in enabled drivers build config 00:02:41.181 net/pfe: not in enabled drivers build config 00:02:41.181 net/qede: not in enabled drivers build config 00:02:41.181 net/ring: not in enabled drivers build config 00:02:41.181 net/sfc: not in enabled drivers build config 00:02:41.181 net/softnic: not in enabled drivers build config 00:02:41.181 net/tap: not in enabled drivers build config 00:02:41.181 net/thunderx: not in enabled drivers build config 00:02:41.181 net/txgbe: not in enabled drivers build config 00:02:41.181 net/vdev_netvsc: not in enabled drivers build config 00:02:41.181 net/vhost: not in enabled drivers build config 00:02:41.181 net/virtio: not in enabled drivers build config 00:02:41.181 net/vmxnet3: not in enabled drivers build config 
00:02:41.181 raw/cnxk_bphy: not in enabled drivers build config 00:02:41.181 raw/cnxk_gpio: not in enabled drivers build config 00:02:41.181 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:41.181 raw/ifpga: not in enabled drivers build config 00:02:41.181 raw/ntb: not in enabled drivers build config 00:02:41.181 raw/skeleton: not in enabled drivers build config 00:02:41.181 crypto/armv8: not in enabled drivers build config 00:02:41.181 crypto/bcmfs: not in enabled drivers build config 00:02:41.181 crypto/caam_jr: not in enabled drivers build config 00:02:41.181 crypto/ccp: not in enabled drivers build config 00:02:41.181 crypto/cnxk: not in enabled drivers build config 00:02:41.181 crypto/dpaa_sec: not in enabled drivers build config 00:02:41.181 crypto/dpaa2_sec: not in enabled drivers build config 00:02:41.181 crypto/ipsec_mb: not in enabled drivers build config 00:02:41.181 crypto/mlx5: not in enabled drivers build config 00:02:41.181 crypto/mvsam: not in enabled drivers build config 00:02:41.181 crypto/nitrox: not in enabled drivers build config 00:02:41.181 crypto/null: not in enabled drivers build config 00:02:41.181 crypto/octeontx: not in enabled drivers build config 00:02:41.181 crypto/openssl: not in enabled drivers build config 00:02:41.181 crypto/scheduler: not in enabled drivers build config 00:02:41.181 crypto/uadk: not in enabled drivers build config 00:02:41.181 crypto/virtio: not in enabled drivers build config 00:02:41.181 compress/isal: not in enabled drivers build config 00:02:41.181 compress/mlx5: not in enabled drivers build config 00:02:41.181 compress/octeontx: not in enabled drivers build config 00:02:41.181 compress/zlib: not in enabled drivers build config 00:02:41.181 regex/mlx5: not in enabled drivers build config 00:02:41.181 regex/cn9k: not in enabled drivers build config 00:02:41.181 ml/cnxk: not in enabled drivers build config 00:02:41.181 vdpa/ifc: not in enabled drivers build config 00:02:41.181 vdpa/mlx5: not in enabled drivers build config 00:02:41.181 vdpa/nfp: not in enabled drivers build config 00:02:41.181 vdpa/sfc: not in enabled drivers build config 00:02:41.181 event/cnxk: not in enabled drivers build config 00:02:41.181 event/dlb2: not in enabled drivers build config 00:02:41.181 event/dpaa: not in enabled drivers build config 00:02:41.181 event/dpaa2: not in enabled drivers build config 00:02:41.181 event/dsw: not in enabled drivers build config 00:02:41.181 event/opdl: not in enabled drivers build config 00:02:41.181 event/skeleton: not in enabled drivers build config 00:02:41.181 event/sw: not in enabled drivers build config 00:02:41.181 event/octeontx: not in enabled drivers build config 00:02:41.181 baseband/acc: not in enabled drivers build config 00:02:41.181 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:41.181 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:41.181 baseband/la12xx: not in enabled drivers build config 00:02:41.181 baseband/null: not in enabled drivers build config 00:02:41.181 baseband/turbo_sw: not in enabled drivers build config 00:02:41.181 gpu/cuda: not in enabled drivers build config 00:02:41.181 00:02:41.181 00:02:41.181 Build targets in project: 215 00:02:41.181 00:02:41.181 DPDK 23.11.0 00:02:41.181 00:02:41.181 User defined options 00:02:41.181 libdir : lib 00:02:41.181 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:41.181 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:41.181 c_link_args : 00:02:41.181 enable_docs : false 00:02:41.181 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:41.181 enable_kmods : false 00:02:41.181 machine : native 00:02:41.181 tests : false 00:02:41.181 00:02:41.181 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:41.181 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:41.440 03:02:44 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:41.440 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:41.440 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:41.440 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:41.440 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:41.440 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:41.440 [5/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:41.699 [6/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:41.699 [7/705] Linking static target lib/librte_kvargs.a 00:02:41.699 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:41.699 [9/705] Linking static target lib/librte_log.a 00:02:41.699 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:41.699 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:41.699 [12/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.699 [13/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:41.699 [14/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:41.971 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:41.971 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:41.971 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.971 [18/705] Linking target lib/librte_log.so.24.0 00:02:41.971 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:41.971 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:42.277 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:42.277 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:42.277 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:42.277 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:42.277 [25/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:42.277 [26/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:42.277 [27/705] Linking target lib/librte_kvargs.so.24.0 00:02:42.277 [28/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:42.277 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:42.277 [30/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:42.277 [31/705] Linking static target lib/librte_telemetry.a 00:02:42.277 [32/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:42.277 [33/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:42.538 [34/705] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:42.538 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:42.538 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:42.538 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:42.538 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:42.538 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:42.538 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:42.538 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:42.798 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.798 [43/705] Linking target lib/librte_telemetry.so.24.0 00:02:42.798 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:42.798 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:42.798 [46/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:43.057 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:43.057 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:43.057 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:43.057 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:43.057 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:43.057 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:43.057 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:43.057 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:43.057 [55/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:43.057 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:43.057 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:43.057 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:43.316 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:43.316 [60/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:43.316 [61/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:43.316 [62/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:43.316 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:43.316 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:43.316 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:43.316 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:43.316 [67/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:43.316 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:43.575 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:43.575 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:43.575 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:43.575 [72/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:43.575 [73/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 
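
The WARNING above is Meson's deprecation notice for invoking the configure step as bare `meson [options]` instead of `meson setup [options]`. Reconstructed from the "User defined options" summary printed by this configure run, a minimal sketch of the equivalent non-deprecated invocation would look like the following (the option spellings mirror that summary; the original command line itself is not shown in this log, so treat this as an approximation):

    # from the DPDK source tree, configure into build-tmp (non-deprecated form)
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # then compile with the same ninja invocation the log records next:
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
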
00:02:43.575 [74/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:43.575 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:43.575 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:43.575 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.575 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:43.834 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:43.834 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:43.834 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:43.834 [82/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:43.834 [83/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:43.834 [84/705] Linking static target lib/librte_ring.a 00:02:44.092 [85/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:44.092 [86/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:44.092 [87/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:44.092 [88/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:44.092 [89/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:44.092 [90/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:44.092 [91/705] Linking static target lib/librte_eal.a 00:02:44.092 [92/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.350 [93/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:44.350 [94/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:44.350 [95/705] Linking static target lib/librte_rcu.a 00:02:44.350 [96/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:44.350 [97/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:44.350 [98/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:44.350 [99/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.350 [100/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:44.350 [101/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:44.350 [102/705] Linking static target lib/librte_mempool.a 00:02:44.608 [103/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:44.608 [104/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.608 [105/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:44.608 [106/705] Linking static target lib/librte_meter.a 00:02:44.608 [107/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.608 [108/705] Linking static target lib/librte_mbuf.a 00:02:44.608 [109/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:44.608 [110/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:44.608 [111/705] Linking static target lib/librte_net.a 00:02:44.866 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:44.866 [113/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.866 [114/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.866 [115/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:44.866 [116/705] 
Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.866 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:45.125 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.125 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:45.125 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:45.384 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:45.643 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:45.643 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:45.643 [124/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:45.643 [125/705] Linking static target lib/librte_pci.a 00:02:45.643 [126/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:45.643 [127/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:45.643 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:45.643 [129/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:45.643 [130/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.643 [131/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:45.644 [132/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:45.902 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:45.902 [134/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:45.902 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:45.902 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:45.902 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:45.902 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:45.902 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:45.902 [140/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:45.902 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:45.902 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:45.902 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:45.902 [144/705] Linking static target lib/librte_cmdline.a 00:02:46.160 [145/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:46.160 [146/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:46.160 [147/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:46.160 [148/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:46.160 [149/705] Linking static target lib/librte_metrics.a 00:02:46.420 [150/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:46.420 [151/705] Linking static target lib/librte_timer.a 00:02:46.420 [152/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.420 [153/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.680 [154/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:46.680 [155/705] Generating lib/timer.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:46.680 [156/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:46.680 [157/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:46.938 [158/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:46.938 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:47.197 [160/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:47.197 [161/705] Linking static target lib/librte_bitratestats.a 00:02:47.197 [162/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:47.197 [163/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.197 [164/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:47.197 [165/705] Linking static target lib/librte_bbdev.a 00:02:47.456 [166/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:47.456 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:47.715 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:47.715 [169/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.715 [170/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:47.715 [171/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:47.715 [172/705] Linking static target lib/librte_ethdev.a 00:02:47.715 [173/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:47.974 [174/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:47.974 [175/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:47.974 [176/705] Linking static target lib/librte_hash.a 00:02:47.974 [177/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:47.974 [178/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:48.233 [179/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:48.233 [180/705] Linking static target lib/acl/libavx2_tmp.a 00:02:48.233 [181/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.233 [182/705] Linking target lib/librte_eal.so.24.0 00:02:48.233 [183/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:48.233 [184/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:48.233 [185/705] Linking static target lib/librte_cfgfile.a 00:02:48.233 [186/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.233 [187/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:48.233 [188/705] Linking target lib/librte_ring.so.24.0 00:02:48.492 [189/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:48.492 [190/705] Linking target lib/librte_meter.so.24.0 00:02:48.492 [191/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:48.493 [192/705] Linking target lib/librte_rcu.so.24.0 00:02:48.493 [193/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:48.493 [194/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.493 [195/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:48.493 [196/705] Linking target lib/librte_mempool.so.24.0 00:02:48.493 [197/705] Linking target lib/librte_pci.so.24.0 00:02:48.493 [198/705] Linking target 
lib/librte_timer.so.24.0 00:02:48.493 [199/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:48.493 [200/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:48.493 [201/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:48.493 [202/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:48.493 [203/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:48.493 [204/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:48.493 [205/705] Linking static target lib/librte_compressdev.a 00:02:48.493 [206/705] Linking target lib/librte_cfgfile.so.24.0 00:02:48.493 [207/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:48.493 [208/705] Linking target lib/librte_mbuf.so.24.0 00:02:48.751 [209/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:48.751 [210/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:48.751 [211/705] Linking target lib/librte_bbdev.so.24.0 00:02:48.751 [212/705] Linking target lib/librte_net.so.24.0 00:02:48.751 [213/705] Linking static target lib/librte_bpf.a 00:02:48.751 [214/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:48.751 [215/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:49.010 [216/705] Linking target lib/librte_cmdline.so.24.0 00:02:49.010 [217/705] Linking target lib/librte_hash.so.24.0 00:02:49.010 [218/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:49.010 [219/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.010 [220/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:49.010 [221/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.010 [222/705] Linking target lib/librte_compressdev.so.24.0 00:02:49.010 [223/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:49.010 [224/705] Linking static target lib/librte_acl.a 00:02:49.010 [225/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:49.010 [226/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:49.010 [227/705] Linking static target lib/librte_distributor.a 00:02:49.269 [228/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.269 [229/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:49.269 [230/705] Linking target lib/librte_acl.so.24.0 00:02:49.269 [231/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:49.269 [232/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.269 [233/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:49.269 [234/705] Linking target lib/librte_distributor.so.24.0 00:02:49.269 [235/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:49.527 [236/705] Linking static target lib/librte_dmadev.a 00:02:49.527 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:49.527 [238/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.527 
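
The recurring "Generating lib/<name>.sym_chk with a custom command" and "Generating symbol file ..." entries are DPDK's per-library symbol audit: for every librte_* target, meson wraps a small check script that compares the symbols the built library actually exports against that library's version map, and the build proceeds only if they match. A hedged way to inspect the same exports by hand, using standard binutils against the build tree shown in this log:

    # list the defined dynamic symbols of a just-linked shared object
    nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so.24.0 | head
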
[239/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:49.527 [240/705] Linking target lib/librte_dmadev.so.24.0 00:02:49.786 [241/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:49.786 [242/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:49.786 [243/705] Linking static target lib/librte_efd.a 00:02:49.786 [244/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:50.044 [245/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:50.044 [246/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.044 [247/705] Linking target lib/librte_efd.so.24.0 00:02:50.044 [248/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:50.044 [249/705] Linking static target lib/librte_cryptodev.a 00:02:50.044 [250/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:50.044 [251/705] Linking static target lib/librte_dispatcher.a 00:02:50.302 [252/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:50.302 [253/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:50.302 [254/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:50.302 [255/705] Linking static target lib/librte_gpudev.a 00:02:50.302 [256/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:50.302 [257/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:50.302 [258/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.570 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:50.846 [260/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:50.846 [261/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:50.846 [262/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:50.846 [263/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:50.846 [264/705] Linking static target lib/librte_gro.a 00:02:50.846 [265/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.846 [266/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:50.846 [267/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:50.846 [268/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.846 [269/705] Linking target lib/librte_gpudev.so.24.0 00:02:50.846 [270/705] Linking target lib/librte_cryptodev.so.24.0 00:02:51.104 [271/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:51.104 [272/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.104 [273/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:51.104 [274/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.104 [275/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:51.104 [276/705] Linking target lib/librte_ethdev.so.24.0 00:02:51.104 [277/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:51.104 [278/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:51.104 [279/705] Linking static target lib/librte_gso.a 00:02:51.104 [280/705] Generating symbol file 
lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:51.104 [281/705] Linking target lib/librte_metrics.so.24.0 00:02:51.362 [282/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:51.362 [283/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:51.362 [284/705] Linking target lib/librte_bpf.so.24.0 00:02:51.362 [285/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:51.362 [286/705] Linking static target lib/librte_eventdev.a 00:02:51.362 [287/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:51.362 [288/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:51.362 [289/705] Linking target lib/librte_bitratestats.so.24.0 00:02:51.362 [290/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.362 [291/705] Linking target lib/librte_gro.so.24.0 00:02:51.362 [292/705] Linking target lib/librte_gso.so.24.0 00:02:51.362 [293/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:51.362 [294/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:51.363 [295/705] Linking static target lib/librte_jobstats.a 00:02:51.363 [296/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:51.363 [297/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:51.620 [298/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:51.621 [299/705] Linking static target lib/librte_ip_frag.a 00:02:51.621 [300/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.621 [301/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:51.621 [302/705] Linking target lib/librte_jobstats.so.24.0 00:02:51.621 [303/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:51.621 [304/705] Linking static target lib/librte_latencystats.a 00:02:51.879 [305/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.880 [306/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:51.880 [307/705] Linking target lib/librte_ip_frag.so.24.0 00:02:51.880 [308/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.880 [309/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:51.880 [310/705] Linking target lib/librte_latencystats.so.24.0 00:02:51.880 [311/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:51.880 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.138 [313/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:52.138 [314/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:52.138 [315/705] Linking static target lib/librte_lpm.a 00:02:52.138 [316/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:52.138 [317/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.138 [318/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:52.138 [319/705] Linking static target lib/librte_pcapng.a 00:02:52.138 [320/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:52.138 [321/705] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.138 [322/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.138 [323/705] Linking target lib/librte_lpm.so.24.0 00:02:52.396 [324/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.396 [325/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.396 [326/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:52.396 [327/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.396 [328/705] Linking target lib/librte_pcapng.so.24.0 00:02:52.396 [329/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:52.396 [330/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:52.396 [331/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:52.396 [332/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:52.396 [333/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:52.655 [334/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:52.655 [335/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:52.655 [336/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:52.655 [337/705] Linking static target lib/librte_power.a 00:02:52.655 [338/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.655 [339/705] Linking static target lib/librte_regexdev.a 00:02:52.655 [340/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:52.655 [341/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:52.655 [342/705] Linking static target lib/librte_rawdev.a 00:02:52.655 [343/705] Linking target lib/librte_eventdev.so.24.0 00:02:52.655 [344/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:52.913 [345/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:52.913 [346/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:52.913 [347/705] Linking static target lib/librte_mldev.a 00:02:52.913 [348/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:52.913 [349/705] Linking target lib/librte_dispatcher.so.24.0 00:02:52.913 [350/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:52.913 [351/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.171 [352/705] Linking target lib/librte_rawdev.so.24.0 00:02:53.171 [353/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.171 [354/705] Linking target lib/librte_power.so.24.0 00:02:53.171 [355/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:53.171 [356/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:53.171 [357/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:53.171 [358/705] Linking static target lib/librte_member.a 00:02:53.171 [359/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.171 [360/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:53.171 [361/705] Linking static target lib/librte_reorder.a 00:02:53.171 [362/705] Linking target 
lib/librte_regexdev.so.24.0 00:02:53.429 [363/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.429 [364/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:53.429 [365/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:53.429 [366/705] Linking static target lib/librte_rib.a 00:02:53.429 [367/705] Linking target lib/librte_member.so.24.0 00:02:53.429 [368/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:53.429 [369/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:53.429 [370/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.429 [371/705] Linking target lib/librte_reorder.so.24.0 00:02:53.429 [372/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:53.429 [373/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:53.429 [374/705] Linking static target lib/librte_stack.a 00:02:53.429 [375/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:53.429 [376/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.429 [377/705] Linking static target lib/librte_security.a 00:02:53.686 [378/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.686 [379/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.686 [380/705] Linking target lib/librte_stack.so.24.0 00:02:53.686 [381/705] Linking target lib/librte_mldev.so.24.0 00:02:53.686 [382/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.686 [383/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.686 [384/705] Linking target lib/librte_rib.so.24.0 00:02:53.686 [385/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:53.686 [386/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:53.945 [387/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.945 [388/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:53.945 [389/705] Linking target lib/librte_security.so.24.0 00:02:53.945 [390/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:53.945 [391/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:54.204 [392/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:54.204 [393/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:54.204 [394/705] Linking static target lib/librte_sched.a 00:02:54.204 [395/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:54.204 [396/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:54.204 [397/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:54.462 [398/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:54.462 [399/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.462 [400/705] Linking target lib/librte_sched.so.24.0 00:02:54.462 [401/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:54.462 [402/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:54.720 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:54.720 [404/705] Generating symbol file 
lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:54.720 [405/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:54.720 [406/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:54.978 [407/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:54.978 [408/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:54.978 [409/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:54.978 [410/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:54.978 [411/705] Linking static target lib/librte_ipsec.a 00:02:54.978 [412/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:54.978 [413/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:55.236 [414/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.236 [415/705] Linking target lib/librte_ipsec.so.24.0 00:02:55.236 [416/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:55.236 [417/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:55.236 [418/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:55.494 [419/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:55.494 [420/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:55.494 [421/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:55.494 [422/705] Linking static target lib/librte_fib.a 00:02:55.494 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:55.752 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:55.752 [425/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:55.752 [426/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:55.752 [427/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.752 [428/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:55.752 [429/705] Linking static target lib/librte_pdcp.a 00:02:55.752 [430/705] Linking target lib/librte_fib.so.24.0 00:02:56.019 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.019 [432/705] Linking target lib/librte_pdcp.so.24.0 00:02:56.019 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:56.281 [434/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:56.281 [435/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:56.281 [436/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:56.281 [437/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:56.281 [438/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:56.540 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:56.540 [440/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:56.540 [441/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:56.540 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:56.540 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:56.540 [444/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:56.540 [445/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:56.540 [446/705] Linking static 
target lib/librte_port.a 00:02:56.798 [447/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:56.798 [448/705] Linking static target lib/librte_pdump.a 00:02:56.798 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:56.798 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:56.798 [451/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.798 [452/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:57.057 [453/705] Linking target lib/librte_port.so.24.0 00:02:57.057 [454/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.057 [455/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:57.057 [456/705] Linking target lib/librte_pdump.so.24.0 00:02:57.057 [457/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:57.057 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:57.315 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:57.315 [460/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:57.315 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:57.315 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:57.315 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:57.574 [464/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:57.574 [465/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:57.574 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:57.574 [467/705] Linking static target lib/librte_table.a 00:02:57.574 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:57.832 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:57.832 [470/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:57.832 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:58.091 [472/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.091 [473/705] Linking target lib/librte_table.so.24.0 00:02:58.091 [474/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:58.091 [475/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:58.091 [476/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:58.349 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:58.349 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:58.349 [479/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:58.349 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:58.349 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:58.608 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:58.608 [483/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:58.608 [484/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:58.608 [485/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:58.608 [486/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 
00:02:58.608 [487/705] Linking static target lib/librte_graph.a 00:02:58.866 [488/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:58.866 [489/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.866 [490/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:59.125 [491/705] Linking target lib/librte_graph.so.24.0 00:02:59.125 [492/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:59.125 [493/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:59.125 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:59.125 [495/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:59.384 [496/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:59.384 [497/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:59.384 [498/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:59.384 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:59.384 [500/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:59.384 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:59.384 [502/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:59.642 [503/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:59.642 [504/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:59.642 [505/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:59.642 [506/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:59.642 [507/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:59.642 [508/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:59.642 [509/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:59.642 [510/705] Linking static target lib/librte_node.a 00:02:59.900 [511/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:59.900 [512/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:59.900 [513/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.900 [514/705] Linking target lib/librte_node.so.24.0 00:02:59.900 [515/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:59.900 [516/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:59.900 [517/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:00.158 [518/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.158 [519/705] Linking static target drivers/librte_bus_pci.a 00:03:00.158 [520/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:00.158 [521/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.158 [522/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.158 [523/705] Linking static target drivers/librte_bus_vdev.a 00:03:00.158 [524/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:00.158 [525/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.158 [526/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:00.158 [527/705] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:00.158 [528/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.158 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:03:00.416 [530/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:00.416 [531/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.416 [532/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:00.416 [533/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:00.416 [534/705] Linking target drivers/librte_bus_pci.so.24.0 00:03:00.416 [535/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:00.416 [536/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:00.416 [537/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:00.674 [538/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:00.674 [539/705] Linking static target drivers/librte_mempool_ring.a 00:03:00.675 [540/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:00.675 [541/705] Linking target drivers/librte_mempool_ring.so.24.0 00:03:00.675 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:00.934 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:00.934 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:01.192 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:01.451 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:01.451 [547/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:01.451 [548/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:01.709 [549/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:01.709 [550/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:01.709 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:01.967 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:01.967 [553/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:01.967 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:01.967 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:01.967 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:02.225 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:02.225 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:02.225 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:02.483 [560/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:02.483 [561/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:02.741 [562/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:02.741 [563/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:02.741 [564/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:02.741 [565/705] Compiling C object 
app/dpdk-graph.p/graph_ip6_route.c.o 00:03:02.741 [566/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:03.000 [567/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:03.000 [568/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:03.000 [569/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:03.000 [570/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:03.000 [571/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:03.000 [572/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:03.258 [573/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:03.258 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:03.258 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:03.258 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:03.516 [577/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:03.516 [578/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:03.774 [579/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:03.774 [580/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:03.774 [581/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:03.774 [582/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:03.774 [583/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:03.774 [584/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:03.774 [585/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:04.032 [586/705] Linking static target drivers/librte_net_i40e.a 00:03:04.032 [587/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:04.032 [588/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:04.032 [589/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:04.289 [590/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:04.289 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:04.289 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:04.289 [593/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:04.289 [594/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.289 [595/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:04.547 [596/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:04.547 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:04.547 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:04.805 [599/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:04.805 [600/705] Linking static target lib/librte_vhost.a 00:03:04.805 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:04.805 [602/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 
00:03:04.805 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:04.805 [604/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:04.805 [605/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:04.805 [606/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:05.063 [607/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:05.063 [608/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:05.063 [609/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:05.063 [610/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:05.063 [611/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:05.321 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:05.321 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:05.579 [614/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.579 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:05.579 [616/705] Linking target lib/librte_vhost.so.24.0 00:03:05.579 [617/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:05.579 [618/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:06.150 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:06.150 [620/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:06.150 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:06.150 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:06.150 [623/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:06.151 [624/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:06.409 [625/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:06.409 [626/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:06.409 [627/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:06.409 [628/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:06.409 [629/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:06.409 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:06.409 [631/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:06.409 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:06.667 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:06.668 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:06.668 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:06.668 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:06.668 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:06.926 [638/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:06.926 [639/705] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:06.926 [640/705] Linking static target lib/librte_pipeline.a 00:03:06.926 [641/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:06.926 [642/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:06.926 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:07.184 [644/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:07.184 [645/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:07.184 [646/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:07.184 [647/705] Linking target app/dpdk-dumpcap 00:03:07.184 [648/705] Linking target app/dpdk-pdump 00:03:07.184 [649/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:07.184 [650/705] Linking target app/dpdk-graph 00:03:07.442 [651/705] Linking target app/dpdk-test-acl 00:03:07.443 [652/705] Linking target app/dpdk-proc-info 00:03:07.443 [653/705] Linking target app/dpdk-test-cmdline 00:03:07.443 [654/705] Linking target app/dpdk-test-compress-perf 00:03:07.443 [655/705] Linking target app/dpdk-test-crypto-perf 00:03:07.443 [656/705] Linking target app/dpdk-test-dma-perf 00:03:07.701 [657/705] Linking target app/dpdk-test-fib 00:03:07.701 [658/705] Linking target app/dpdk-test-flow-perf 00:03:07.701 [659/705] Linking target app/dpdk-test-eventdev 00:03:07.701 [660/705] Linking target app/dpdk-test-gpudev 00:03:07.701 [661/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:07.701 [662/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:07.959 [663/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:07.959 [664/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:07.959 [665/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:07.959 [666/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:07.959 [667/705] Linking target app/dpdk-test-mldev 00:03:08.216 [668/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:08.216 [669/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:08.216 [670/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:08.216 [671/705] Linking target app/dpdk-test-bbdev 00:03:08.473 [672/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:08.473 [673/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:08.473 [674/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:08.473 [675/705] Linking target app/dpdk-test-pipeline 00:03:08.473 [676/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:08.732 [677/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:08.732 [678/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.732 [679/705] Linking target lib/librte_pipeline.so.24.0 00:03:08.732 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:08.732 [681/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:08.732 [682/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:08.990 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:08.990 [684/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:08.990 [685/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:08.990 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:09.248 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:09.248 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:09.506 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:09.506 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:09.506 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:09.796 [692/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:09.796 [693/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:09.796 [694/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:09.796 [695/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:10.072 [696/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:10.072 [697/705] Linking target app/dpdk-test-sad 00:03:10.072 [698/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:10.072 [699/705] Linking target app/dpdk-test-regex 00:03:10.330 [700/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:10.330 [701/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:10.330 [702/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:10.588 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:10.588 [704/705] Linking target app/dpdk-test-security-perf 00:03:10.845 [705/705] Linking target app/dpdk-testpmd 00:03:10.845 03:03:14 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:10.845 03:03:14 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:10.845 03:03:14 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:10.845 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:10.845 [0/1] Installing files. 
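At this point the harness checks "uname -s" (skipping the FreeBSD-only branch of autobuild_common.sh on Linux) and then runs the install step whose output follows. As a rough sketch, the equivalent manual sequence would look like the lines below; the meson configure options the harness actually passed are not visible in this log, and the --prefix value is an assumption inferred from the .../dpdk/build destination paths in the listing that follows:

  # Configure an out-of-tree build directory (options assumed; only the paths appear in this log)
  meson setup /home/vagrant/spdk_repo/dpdk/build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build
  # Build and install with 10 parallel jobs, matching the logged command
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install

The install phase copies the examples/ source tree under share/dpdk/examples inside the prefix, which is what the "Installing ..." lines below record.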
00:03:11.107 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.108 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.109 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.110 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.111 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.112 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.113 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.113 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.113 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.113 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:11.114 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:11.114 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.114 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.376 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.376 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.376 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.376 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:11.376 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.376 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:11.376 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.376 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:11.376 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.376 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:11.376 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.376 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.376 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.376 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.376 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.377 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.378 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.379 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.380 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.380 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:11.380 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:11.380 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:11.380 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:11.380 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:11.380 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:11.380 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:11.380 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:11.380 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:11.380 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:11.380 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:11.380 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:11.380 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:11.380 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:11.380 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:11.380 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:11.380 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:11.380 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:11.380 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:11.380 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:11.380 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:11.380 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:11.380 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:11.380 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:11.380 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:11.380 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:11.380 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:11.380 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:11.380 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:11.380 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:11.381 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:11.381 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:11.381 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:11.381 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:11.381 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:11.381 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:11.381 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:11.381 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:11.381 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:11.381 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:11.381 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:11.381 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:11.381 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:11.381 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:11.381 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:11.381 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:11.381 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:11.381 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:11.381 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:11.381 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:11.381 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:11.381 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:11.381 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:11.381 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:11.381 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:11.381 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:11.381 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:11.381 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:11.381 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:11.381 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:11.381 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:11.381 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:11.381 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:11.381 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:11.381 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:11.381 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:11.381 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:11.381 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:11.381 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:11.381 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:11.381 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:11.381 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:11.381 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:11.381 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:11.381 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:11.381 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:11.381 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:11.381 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:11.381 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:11.381 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:11.381 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:11.381 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:11.381 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:11.381 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:11.381 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:11.381 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:11.381 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:11.381 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:11.381 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:11.381 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:11.381 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:11.381 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:11.381 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:11.381 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:11.381 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:11.381 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:11.381 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:11.381 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:11.381 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:11.381 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:11.381 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:11.381 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:11.381 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:11.381 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:11.381 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:11.381 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:11.381 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:11.381 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:11.381 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:11.381 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:11.381 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:11.381 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:11.381 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:11.381 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:11.381 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:11.381 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:11.381 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:11.381 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:11.381 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:11.381 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:11.381 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:11.381 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:11.381 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:11.381 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:11.382 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:11.382 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:11.382 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:11.382 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:11.382 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:11.382 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:11.382 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:11.382 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:11.382 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:11.382 03:03:14 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:11.382 03:03:14 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:11.382 00:03:11.382 real 0m35.588s 00:03:11.382 user 4m10.285s 00:03:11.382 sys 0m34.872s 00:03:11.382 03:03:14 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:11.382 03:03:14 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:11.382 ************************************ 00:03:11.382 END TEST build_native_dpdk 00:03:11.382 ************************************ 00:03:11.382 03:03:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:11.382 03:03:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:11.382 03:03:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:11.382 03:03:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:11.382 03:03:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:11.382 03:03:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:11.382 03:03:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:11.382 03:03:14 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:11.641 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:11.641 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.641 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.641 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:11.899 Using 'verbs' RDMA provider 00:03:22.802 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:32.805 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:33.375 Creating mk/config.mk...done. 00:03:33.375 Creating mk/cc.flags.mk...done. 00:03:33.375 Type 'make' to build. 00:03:33.375 03:03:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:33.375 03:03:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:33.375 03:03:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:33.375 03:03:36 -- common/autotest_common.sh@10 -- $ set +x 00:03:33.375 ************************************ 00:03:33.375 START TEST make 00:03:33.375 ************************************ 00:03:33.375 03:03:36 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:33.634 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:33.634 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:33.634 meson setup builddir \ 00:03:33.634 -Dwith-libaio=enabled \ 00:03:33.634 -Dwith-liburing=enabled \ 00:03:33.634 -Dwith-libvfn=disabled \ 00:03:33.634 -Dwith-spdk=false && \ 00:03:33.634 meson compile -C builddir && \ 00:03:33.634 cd -) 00:03:33.634 make[1]: Nothing to be done for 'all'.
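At this point in the log the DPDK built in the previous stage is handed to SPDK: configure is invoked with --with-dpdk=/home/vagrant/spdk_repo/dpdk/build, and the "Using .../build/lib/pkgconfig for additional libs" line shows the dependency being resolved through the libdpdk.pc installed above rather than through a system DPDK. A minimal sketch for inspecting what pkg-config sees for that build (paths are taken from the log; the 23.11 version string is an assumption based on this job's dpdk-v23.11 target):

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  # version of the just-installed DPDK; expected to report 23.11 for this job (assumption)
  pkg-config --modversion libdpdk
  # the include and link flags SPDK's configure consumes
  pkg-config --cflags --libs libdpdk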
00:03:35.536 The Meson build system 00:03:35.536 Version: 1.5.0 00:03:35.536 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:35.536 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:35.536 Build type: native build 00:03:35.536 Project name: xnvme 00:03:35.536 Project version: 0.7.3 00:03:35.536 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:35.536 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:35.536 Host machine cpu family: x86_64 00:03:35.536 Host machine cpu: x86_64 00:03:35.536 Message: host_machine.system: linux 00:03:35.536 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:35.536 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:35.536 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:35.536 Run-time dependency threads found: YES 00:03:35.536 Has header "setupapi.h" : NO 00:03:35.536 Has header "linux/blkzoned.h" : YES 00:03:35.536 Has header "linux/blkzoned.h" : YES (cached) 00:03:35.536 Has header "libaio.h" : YES 00:03:35.536 Library aio found: YES 00:03:35.536 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:35.536 Run-time dependency liburing found: YES 2.2 00:03:35.536 Dependency libvfn skipped: feature with-libvfn disabled 00:03:35.536 Run-time dependency appleframeworks found: NO (tried framework) 00:03:35.536 Run-time dependency appleframeworks found: NO (tried framework) 00:03:35.536 Configuring xnvme_config.h using configuration 00:03:35.536 Configuring xnvme.spec using configuration 00:03:35.536 Run-time dependency bash-completion found: YES 2.11 00:03:35.536 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:35.536 Program cp found: YES (/usr/bin/cp) 00:03:35.536 Has header "winsock2.h" : NO 00:03:35.536 Has header "dbghelp.h" : NO 00:03:35.536 Library rpcrt4 found: NO 00:03:35.536 Library rt found: YES 00:03:35.536 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:35.536 Found CMake: /usr/bin/cmake (3.27.7) 00:03:35.536 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:35.536 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:35.536 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:35.536 Build targets in project: 32 00:03:35.536 00:03:35.536 xnvme 0.7.3 00:03:35.536 00:03:35.536 User defined options 00:03:35.536 with-libaio : enabled 00:03:35.536 with-liburing: enabled 00:03:35.536 with-libvfn : disabled 00:03:35.536 with-spdk : false 00:03:35.536 00:03:35.536 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:36.103 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:36.103 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:36.103 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:36.103 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:36.103 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:36.103 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:36.103 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:36.103 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:36.103 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:36.103 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:36.103 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:36.103 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:36.103 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:36.103 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:36.103 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:36.103 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:36.361 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:36.361 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:36.361 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:36.361 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:36.361 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:36.361 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:36.361 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:36.361 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:36.361 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:36.361 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:36.361 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:36.361 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:36.361 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:36.361 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:36.361 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:36.361 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:36.361 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:36.361 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:36.361 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:36.361 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:36.361 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:36.361 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:36.361 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:36.361 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:36.361 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:36.361 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:36.361 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:36.361 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:36.361 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:36.361 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:36.361 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:36.361 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:36.361 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:36.361 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:36.361 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:36.361 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:36.362 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:36.619 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:36.619 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:36.619 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:36.619 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:36.619 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:36.619 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:36.619 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:36.619 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:36.619 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:36.619 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:36.619 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:36.619 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:36.619 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:36.619 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:36.619 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:36.619 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:36.619 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:36.619 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:36.619 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:36.619 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:36.619 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:36.620 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:36.876 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:36.876 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:36.876 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:36.876 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:36.876 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:36.876 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:36.876 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:36.876 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:36.876 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:36.876 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:36.876 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:36.876 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:36.876 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:36.876 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:36.876 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:36.876 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:36.876 [91/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:36.876 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:36.876 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:36.877 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:36.877 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:36.877 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:37.135 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:37.135 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:37.135 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:37.135 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:37.135 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:37.135 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:37.135 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:37.135 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:37.135 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:37.135 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:37.135 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:37.135 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:37.135 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:37.135 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:37.135 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:37.135 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:37.135 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:37.135 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:37.135 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:37.135 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:37.135 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:37.135 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:37.135 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:37.135 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:37.135 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:37.135 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:37.135 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:37.135 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:37.135 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:37.135 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:37.135 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:37.135 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:37.135 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:37.135 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:37.135 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:37.135 [132/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:37.393 [133/203] Linking target lib/libxnvme.so 00:03:37.393 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:37.393 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:37.393 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:37.393 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:37.393 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:37.393 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:37.393 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:37.393 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:37.393 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:37.393 
[143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:37.393 [144/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:37.393 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:37.393 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:37.393 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:37.651 [148/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:37.651 [149/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:37.651 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:37.651 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:37.651 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:37.651 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:37.651 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:37.651 [155/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:37.651 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:37.651 [157/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:37.651 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:37.651 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:37.651 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:37.651 [161/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:37.651 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:37.651 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:37.651 [164/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:37.651 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:37.651 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:37.908 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:37.908 [168/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:37.908 [169/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:37.908 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:37.908 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:37.908 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:37.908 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:37.908 [174/203] Linking static target lib/libxnvme.a 00:03:38.166 [175/203] Linking target tests/xnvme_tests_cli 00:03:38.166 [176/203] Linking target tests/xnvme_tests_xnvme_file 00:03:38.166 [177/203] Linking target tests/xnvme_tests_async_intf 00:03:38.166 [178/203] Linking target tests/xnvme_tests_enum 00:03:38.166 [179/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:38.166 [180/203] Linking target tests/xnvme_tests_znd_append 00:03:38.166 [181/203] Linking target tests/xnvme_tests_lblk 00:03:38.166 [182/203] Linking target tests/xnvme_tests_ioworker 00:03:38.166 [183/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:38.166 [184/203] Linking target tests/xnvme_tests_znd_state 00:03:38.166 [185/203] Linking target tests/xnvme_tests_scc 00:03:38.166 [186/203] Linking target tests/xnvme_tests_buf 00:03:38.166 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:38.166 [188/203] Linking target tools/xnvme 00:03:38.166 [189/203] Linking target tools/xdd 00:03:38.166 [190/203] Linking target tests/xnvme_tests_map 00:03:38.166 [191/203] Linking target tests/xnvme_tests_kvs 
00:03:38.166 [192/203] Linking target tools/kvs 00:03:38.166 [193/203] Linking target tools/lblk 00:03:38.166 [194/203] Linking target examples/xnvme_enum 00:03:38.166 [195/203] Linking target examples/xnvme_single_async 00:03:38.166 [196/203] Linking target examples/xnvme_dev 00:03:38.166 [197/203] Linking target examples/xnvme_hello 00:03:38.166 [198/203] Linking target examples/xnvme_single_sync 00:03:38.166 [199/203] Linking target tools/xnvme_file 00:03:38.166 [200/203] Linking target tools/zoned 00:03:38.166 [201/203] Linking target examples/zoned_io_sync 00:03:38.166 [202/203] Linking target examples/zoned_io_async 00:03:38.166 [203/203] Linking target examples/xnvme_io_async 00:03:38.166 INFO: autodetecting backend as ninja 00:03:38.166 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:38.166 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:10.296 CC lib/ut_mock/mock.o 00:04:10.296 CC lib/log/log.o 00:04:10.296 CC lib/log/log_flags.o 00:04:10.296 CC lib/log/log_deprecated.o 00:04:10.296 CC lib/ut/ut.o 00:04:10.296 LIB libspdk_log.a 00:04:10.296 LIB libspdk_ut_mock.a 00:04:10.296 LIB libspdk_ut.a 00:04:10.296 SO libspdk_ut_mock.so.6.0 00:04:10.296 SO libspdk_ut.so.2.0 00:04:10.296 SO libspdk_log.so.7.0 00:04:10.296 SYMLINK libspdk_ut_mock.so 00:04:10.296 SYMLINK libspdk_ut.so 00:04:10.296 SYMLINK libspdk_log.so 00:04:10.296 CC lib/util/base64.o 00:04:10.296 CXX lib/trace_parser/trace.o 00:04:10.296 CC lib/util/bit_array.o 00:04:10.296 CC lib/util/cpuset.o 00:04:10.296 CC lib/util/crc16.o 00:04:10.296 CC lib/util/crc32c.o 00:04:10.296 CC lib/util/crc32.o 00:04:10.296 CC lib/dma/dma.o 00:04:10.296 CC lib/ioat/ioat.o 00:04:10.296 CC lib/vfio_user/host/vfio_user_pci.o 00:04:10.296 CC lib/vfio_user/host/vfio_user.o 00:04:10.296 CC lib/util/crc32_ieee.o 00:04:10.296 CC lib/util/crc64.o 00:04:10.296 CC lib/util/dif.o 00:04:10.296 CC lib/util/fd.o 00:04:10.296 LIB libspdk_dma.a 00:04:10.296 CC lib/util/fd_group.o 00:04:10.296 SO libspdk_dma.so.5.0 00:04:10.296 CC lib/util/file.o 00:04:10.297 CC lib/util/hexlify.o 00:04:10.297 SYMLINK libspdk_dma.so 00:04:10.297 CC lib/util/iov.o 00:04:10.297 LIB libspdk_ioat.a 00:04:10.297 CC lib/util/math.o 00:04:10.297 CC lib/util/net.o 00:04:10.297 SO libspdk_ioat.so.7.0 00:04:10.297 LIB libspdk_vfio_user.a 00:04:10.297 CC lib/util/pipe.o 00:04:10.297 SYMLINK libspdk_ioat.so 00:04:10.297 CC lib/util/strerror_tls.o 00:04:10.297 CC lib/util/string.o 00:04:10.297 SO libspdk_vfio_user.so.5.0 00:04:10.297 SYMLINK libspdk_vfio_user.so 00:04:10.297 CC lib/util/uuid.o 00:04:10.297 CC lib/util/xor.o 00:04:10.297 CC lib/util/zipf.o 00:04:10.297 CC lib/util/md5.o 00:04:10.297 LIB libspdk_util.a 00:04:10.297 SO libspdk_util.so.10.0 00:04:10.297 LIB libspdk_trace_parser.a 00:04:10.297 SYMLINK libspdk_util.so 00:04:10.297 SO libspdk_trace_parser.so.6.0 00:04:10.297 SYMLINK libspdk_trace_parser.so 00:04:10.297 CC lib/rdma_provider/common.o 00:04:10.297 CC lib/vmd/vmd.o 00:04:10.297 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:10.297 CC lib/vmd/led.o 00:04:10.297 CC lib/rdma_utils/rdma_utils.o 00:04:10.297 CC lib/conf/conf.o 00:04:10.297 CC lib/idxd/idxd.o 00:04:10.297 CC lib/env_dpdk/env.o 00:04:10.297 CC lib/idxd/idxd_user.o 00:04:10.297 CC lib/json/json_parse.o 00:04:10.297 CC lib/json/json_util.o 00:04:10.297 CC lib/json/json_write.o 00:04:10.297 LIB libspdk_rdma_provider.a 00:04:10.297 SO libspdk_rdma_provider.so.6.0 00:04:10.297 LIB libspdk_conf.a 00:04:10.297 SO libspdk_conf.so.6.0 00:04:10.297 CC 
lib/env_dpdk/memory.o 00:04:10.297 SYMLINK libspdk_rdma_provider.so 00:04:10.297 CC lib/env_dpdk/pci.o 00:04:10.297 CC lib/idxd/idxd_kernel.o 00:04:10.297 LIB libspdk_rdma_utils.a 00:04:10.297 SYMLINK libspdk_conf.so 00:04:10.297 CC lib/env_dpdk/init.o 00:04:10.297 SO libspdk_rdma_utils.so.1.0 00:04:10.297 SYMLINK libspdk_rdma_utils.so 00:04:10.297 CC lib/env_dpdk/threads.o 00:04:10.297 CC lib/env_dpdk/pci_ioat.o 00:04:10.297 CC lib/env_dpdk/pci_virtio.o 00:04:10.297 LIB libspdk_json.a 00:04:10.297 SO libspdk_json.so.6.0 00:04:10.297 CC lib/env_dpdk/pci_vmd.o 00:04:10.297 CC lib/env_dpdk/pci_idxd.o 00:04:10.297 CC lib/env_dpdk/pci_event.o 00:04:10.297 SYMLINK libspdk_json.so 00:04:10.297 CC lib/env_dpdk/sigbus_handler.o 00:04:10.297 CC lib/env_dpdk/pci_dpdk.o 00:04:10.297 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:10.297 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:10.297 LIB libspdk_idxd.a 00:04:10.297 SO libspdk_idxd.so.12.1 00:04:10.297 LIB libspdk_vmd.a 00:04:10.297 SO libspdk_vmd.so.6.0 00:04:10.297 SYMLINK libspdk_idxd.so 00:04:10.297 CC lib/jsonrpc/jsonrpc_server.o 00:04:10.297 CC lib/jsonrpc/jsonrpc_client.o 00:04:10.297 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:10.297 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:10.297 SYMLINK libspdk_vmd.so 00:04:10.297 LIB libspdk_jsonrpc.a 00:04:10.297 SO libspdk_jsonrpc.so.6.0 00:04:10.297 SYMLINK libspdk_jsonrpc.so 00:04:10.297 CC lib/rpc/rpc.o 00:04:10.297 LIB libspdk_env_dpdk.a 00:04:10.297 SO libspdk_env_dpdk.so.15.0 00:04:10.297 LIB libspdk_rpc.a 00:04:10.297 SO libspdk_rpc.so.6.0 00:04:10.297 SYMLINK libspdk_rpc.so 00:04:10.297 SYMLINK libspdk_env_dpdk.so 00:04:10.297 CC lib/trace/trace.o 00:04:10.297 CC lib/trace/trace_flags.o 00:04:10.297 CC lib/trace/trace_rpc.o 00:04:10.297 CC lib/notify/notify_rpc.o 00:04:10.297 CC lib/notify/notify.o 00:04:10.297 CC lib/keyring/keyring.o 00:04:10.297 CC lib/keyring/keyring_rpc.o 00:04:10.297 LIB libspdk_notify.a 00:04:10.297 SO libspdk_notify.so.6.0 00:04:10.297 SYMLINK libspdk_notify.so 00:04:10.297 LIB libspdk_keyring.a 00:04:10.297 LIB libspdk_trace.a 00:04:10.297 SO libspdk_keyring.so.2.0 00:04:10.297 SO libspdk_trace.so.11.0 00:04:10.297 SYMLINK libspdk_keyring.so 00:04:10.297 SYMLINK libspdk_trace.so 00:04:10.297 CC lib/thread/thread.o 00:04:10.297 CC lib/thread/iobuf.o 00:04:10.297 CC lib/sock/sock.o 00:04:10.297 CC lib/sock/sock_rpc.o 00:04:10.556 LIB libspdk_sock.a 00:04:10.815 SO libspdk_sock.so.10.0 00:04:10.815 SYMLINK libspdk_sock.so 00:04:11.074 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:11.074 CC lib/nvme/nvme_ctrlr.o 00:04:11.074 CC lib/nvme/nvme_fabric.o 00:04:11.074 CC lib/nvme/nvme_ns_cmd.o 00:04:11.074 CC lib/nvme/nvme_pcie_common.o 00:04:11.074 CC lib/nvme/nvme_ns.o 00:04:11.074 CC lib/nvme/nvme_pcie.o 00:04:11.074 CC lib/nvme/nvme_qpair.o 00:04:11.074 CC lib/nvme/nvme.o 00:04:11.640 CC lib/nvme/nvme_quirks.o 00:04:11.640 CC lib/nvme/nvme_transport.o 00:04:11.640 CC lib/nvme/nvme_discovery.o 00:04:11.640 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:11.640 LIB libspdk_thread.a 00:04:11.899 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:11.899 CC lib/nvme/nvme_tcp.o 00:04:11.899 SO libspdk_thread.so.10.1 00:04:11.899 CC lib/nvme/nvme_opal.o 00:04:11.899 SYMLINK libspdk_thread.so 00:04:11.899 CC lib/accel/accel.o 00:04:12.159 CC lib/blob/blobstore.o 00:04:12.159 CC lib/accel/accel_rpc.o 00:04:12.159 CC lib/init/json_config.o 00:04:12.159 CC lib/init/subsystem.o 00:04:12.159 CC lib/virtio/virtio.o 00:04:12.420 CC lib/nvme/nvme_io_msg.o 00:04:12.420 CC lib/nvme/nvme_poll_group.o 00:04:12.420 CC lib/nvme/nvme_zns.o 
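A brief key to the make output above and below: SPDK's build wrappers print abbreviated action labels, where CC is a C object compile, CXX a C++ one, LIB links a static archive, SO links the shared variant (enabled here by --with-shared), and SYMLINK creates the unversioned .so link next to it. When triaging a failure in a single component, the recursive makefiles can be scoped to one directory rather than rerunning the whole tree; a sketch, assuming the repo path shown in this log:

  cd /home/vagrant/spdk_repo/spdk
  # rebuild only the nvme library whose objects are being compiled above
  make -C lib/nvme -j10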
00:04:12.420 CC lib/fsdev/fsdev.o 00:04:12.420 CC lib/init/subsystem_rpc.o 00:04:12.420 CC lib/virtio/virtio_vhost_user.o 00:04:12.420 CC lib/init/rpc.o 00:04:12.681 LIB libspdk_init.a 00:04:12.681 SO libspdk_init.so.6.0 00:04:12.681 CC lib/virtio/virtio_vfio_user.o 00:04:12.681 SYMLINK libspdk_init.so 00:04:12.681 CC lib/fsdev/fsdev_io.o 00:04:12.943 CC lib/nvme/nvme_stubs.o 00:04:12.943 CC lib/nvme/nvme_auth.o 00:04:12.943 CC lib/virtio/virtio_pci.o 00:04:12.943 CC lib/fsdev/fsdev_rpc.o 00:04:12.943 CC lib/event/app.o 00:04:13.205 CC lib/event/reactor.o 00:04:13.205 CC lib/accel/accel_sw.o 00:04:13.205 CC lib/blob/request.o 00:04:13.205 LIB libspdk_fsdev.a 00:04:13.205 LIB libspdk_virtio.a 00:04:13.205 SO libspdk_fsdev.so.1.0 00:04:13.205 SO libspdk_virtio.so.7.0 00:04:13.205 CC lib/nvme/nvme_cuse.o 00:04:13.205 SYMLINK libspdk_fsdev.so 00:04:13.205 CC lib/nvme/nvme_rdma.o 00:04:13.205 SYMLINK libspdk_virtio.so 00:04:13.205 CC lib/event/log_rpc.o 00:04:13.205 CC lib/event/app_rpc.o 00:04:13.466 CC lib/event/scheduler_static.o 00:04:13.466 LIB libspdk_accel.a 00:04:13.466 SO libspdk_accel.so.16.0 00:04:13.466 CC lib/blob/zeroes.o 00:04:13.466 CC lib/blob/blob_bs_dev.o 00:04:13.466 SYMLINK libspdk_accel.so 00:04:13.466 LIB libspdk_event.a 00:04:13.727 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:13.727 SO libspdk_event.so.14.0 00:04:13.727 CC lib/bdev/bdev.o 00:04:13.727 CC lib/bdev/bdev_rpc.o 00:04:13.727 CC lib/bdev/part.o 00:04:13.727 CC lib/bdev/bdev_zone.o 00:04:13.727 SYMLINK libspdk_event.so 00:04:13.727 CC lib/bdev/scsi_nvme.o 00:04:13.988 LIB libspdk_fuse_dispatcher.a 00:04:14.249 SO libspdk_fuse_dispatcher.so.1.0 00:04:14.249 SYMLINK libspdk_fuse_dispatcher.so 00:04:14.510 LIB libspdk_nvme.a 00:04:14.769 SO libspdk_nvme.so.14.0 00:04:15.030 SYMLINK libspdk_nvme.so 00:04:15.290 LIB libspdk_blob.a 00:04:15.290 SO libspdk_blob.so.11.0 00:04:15.549 SYMLINK libspdk_blob.so 00:04:15.549 CC lib/lvol/lvol.o 00:04:15.549 CC lib/blobfs/tree.o 00:04:15.549 CC lib/blobfs/blobfs.o 00:04:16.121 LIB libspdk_bdev.a 00:04:16.121 SO libspdk_bdev.so.16.0 00:04:16.383 SYMLINK libspdk_bdev.so 00:04:16.383 LIB libspdk_lvol.a 00:04:16.383 SO libspdk_lvol.so.10.0 00:04:16.383 CC lib/nbd/nbd_rpc.o 00:04:16.383 CC lib/nbd/nbd.o 00:04:16.383 CC lib/nvmf/ctrlr.o 00:04:16.383 CC lib/nvmf/ctrlr_discovery.o 00:04:16.383 CC lib/nvmf/ctrlr_bdev.o 00:04:16.383 CC lib/scsi/dev.o 00:04:16.383 CC lib/ublk/ublk.o 00:04:16.383 CC lib/ftl/ftl_core.o 00:04:16.383 SYMLINK libspdk_lvol.so 00:04:16.383 CC lib/ftl/ftl_init.o 00:04:16.383 LIB libspdk_blobfs.a 00:04:16.642 SO libspdk_blobfs.so.10.0 00:04:16.642 CC lib/scsi/lun.o 00:04:16.642 SYMLINK libspdk_blobfs.so 00:04:16.642 CC lib/scsi/port.o 00:04:16.642 CC lib/ftl/ftl_layout.o 00:04:16.642 CC lib/nvmf/subsystem.o 00:04:16.642 CC lib/nvmf/nvmf.o 00:04:16.642 CC lib/nvmf/nvmf_rpc.o 00:04:16.642 LIB libspdk_nbd.a 00:04:16.642 SO libspdk_nbd.so.7.0 00:04:16.924 CC lib/ftl/ftl_debug.o 00:04:16.924 SYMLINK libspdk_nbd.so 00:04:16.924 CC lib/scsi/scsi.o 00:04:16.924 CC lib/nvmf/transport.o 00:04:16.924 CC lib/nvmf/tcp.o 00:04:16.924 CC lib/ftl/ftl_io.o 00:04:16.924 CC lib/scsi/scsi_bdev.o 00:04:17.198 CC lib/ublk/ublk_rpc.o 00:04:17.198 CC lib/scsi/scsi_pr.o 00:04:17.198 CC lib/ftl/ftl_sb.o 00:04:17.198 LIB libspdk_ublk.a 00:04:17.198 SO libspdk_ublk.so.3.0 00:04:17.198 CC lib/ftl/ftl_l2p.o 00:04:17.198 SYMLINK libspdk_ublk.so 00:04:17.198 CC lib/scsi/scsi_rpc.o 00:04:17.456 CC lib/scsi/task.o 00:04:17.456 CC lib/ftl/ftl_l2p_flat.o 00:04:17.456 CC lib/ftl/ftl_nv_cache.o 
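Because --with-shared was passed to configure, each libspdk_*.so produced in this run is linked against the custom DPDK shared objects installed earlier rather than any system copy. A quick check that the env_dpdk shim resolved the intended librte_* libraries (a sketch; build/lib as the output directory is an assumption based on SPDK's default layout):

  cd /home/vagrant/spdk_repo/spdk
  # every librte_* line should point into /home/vagrant/spdk_repo/dpdk/build/lib
  ldd build/lib/libspdk_env_dpdk.so | grep librte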
00:04:17.456 CC lib/ftl/ftl_band.o 00:04:17.456 CC lib/ftl/ftl_band_ops.o 00:04:17.456 CC lib/ftl/ftl_writer.o 00:04:17.456 CC lib/ftl/ftl_rq.o 00:04:17.456 LIB libspdk_scsi.a 00:04:17.456 CC lib/ftl/ftl_reloc.o 00:04:17.456 SO libspdk_scsi.so.9.0 00:04:17.715 CC lib/ftl/ftl_l2p_cache.o 00:04:17.715 SYMLINK libspdk_scsi.so 00:04:17.715 CC lib/ftl/ftl_p2l.o 00:04:17.715 CC lib/ftl/ftl_p2l_log.o 00:04:17.715 CC lib/iscsi/conn.o 00:04:17.715 CC lib/vhost/vhost.o 00:04:17.715 CC lib/vhost/vhost_rpc.o 00:04:17.974 CC lib/vhost/vhost_scsi.o 00:04:17.974 CC lib/vhost/vhost_blk.o 00:04:17.974 CC lib/vhost/rte_vhost_user.o 00:04:17.974 CC lib/ftl/mngt/ftl_mngt.o 00:04:18.234 CC lib/iscsi/init_grp.o 00:04:18.234 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:18.234 CC lib/iscsi/iscsi.o 00:04:18.234 CC lib/nvmf/stubs.o 00:04:18.234 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:18.234 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:18.234 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:18.492 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:18.492 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:18.492 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:18.492 CC lib/nvmf/mdns_server.o 00:04:18.492 CC lib/nvmf/rdma.o 00:04:18.492 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:18.492 CC lib/iscsi/param.o 00:04:18.492 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:18.751 CC lib/nvmf/auth.o 00:04:18.751 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:18.751 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:18.751 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:18.751 CC lib/ftl/utils/ftl_conf.o 00:04:18.751 CC lib/iscsi/portal_grp.o 00:04:18.751 CC lib/ftl/utils/ftl_md.o 00:04:18.751 LIB libspdk_vhost.a 00:04:19.010 CC lib/ftl/utils/ftl_mempool.o 00:04:19.010 SO libspdk_vhost.so.8.0 00:04:19.010 CC lib/iscsi/tgt_node.o 00:04:19.010 CC lib/ftl/utils/ftl_bitmap.o 00:04:19.010 SYMLINK libspdk_vhost.so 00:04:19.010 CC lib/ftl/utils/ftl_property.o 00:04:19.010 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:19.010 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:19.010 CC lib/iscsi/iscsi_subsystem.o 00:04:19.010 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:19.269 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:19.269 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:19.269 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:19.269 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:19.269 CC lib/iscsi/iscsi_rpc.o 00:04:19.269 CC lib/iscsi/task.o 00:04:19.269 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:19.269 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:19.269 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:19.269 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:19.529 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:19.529 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:19.529 CC lib/ftl/base/ftl_base_dev.o 00:04:19.529 CC lib/ftl/base/ftl_base_bdev.o 00:04:19.529 CC lib/ftl/ftl_trace.o 00:04:19.788 LIB libspdk_iscsi.a 00:04:19.788 LIB libspdk_ftl.a 00:04:19.788 SO libspdk_iscsi.so.8.0 00:04:19.788 SYMLINK libspdk_iscsi.so 00:04:19.788 SO libspdk_ftl.so.9.0 00:04:20.047 SYMLINK libspdk_ftl.so 00:04:20.305 LIB libspdk_nvmf.a 00:04:20.305 SO libspdk_nvmf.so.19.0 00:04:20.565 SYMLINK libspdk_nvmf.so 00:04:20.823 CC module/env_dpdk/env_dpdk_rpc.o 00:04:20.823 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:20.824 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:20.824 CC module/fsdev/aio/fsdev_aio.o 00:04:20.824 CC module/sock/posix/posix.o 00:04:20.824 CC module/keyring/file/keyring.o 00:04:20.824 CC module/blob/bdev/blob_bdev.o 00:04:20.824 CC module/scheduler/gscheduler/gscheduler.o 00:04:20.824 CC module/accel/error/accel_error.o 00:04:20.824 CC module/accel/ioat/accel_ioat.o 
00:04:20.824 LIB libspdk_env_dpdk_rpc.a 00:04:21.082 SO libspdk_env_dpdk_rpc.so.6.0 00:04:21.082 LIB libspdk_scheduler_dpdk_governor.a 00:04:21.082 SYMLINK libspdk_env_dpdk_rpc.so 00:04:21.082 CC module/accel/error/accel_error_rpc.o 00:04:21.082 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:21.082 LIB libspdk_scheduler_gscheduler.a 00:04:21.082 SO libspdk_scheduler_gscheduler.so.4.0 00:04:21.082 CC module/accel/ioat/accel_ioat_rpc.o 00:04:21.082 CC module/keyring/file/keyring_rpc.o 00:04:21.082 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:21.082 LIB libspdk_scheduler_dynamic.a 00:04:21.082 SYMLINK libspdk_scheduler_gscheduler.so 00:04:21.082 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:21.082 SO libspdk_scheduler_dynamic.so.4.0 00:04:21.082 LIB libspdk_accel_ioat.a 00:04:21.082 LIB libspdk_accel_error.a 00:04:21.082 SYMLINK libspdk_scheduler_dynamic.so 00:04:21.082 SO libspdk_accel_ioat.so.6.0 00:04:21.082 SO libspdk_accel_error.so.2.0 00:04:21.082 LIB libspdk_blob_bdev.a 00:04:21.082 LIB libspdk_keyring_file.a 00:04:21.082 SO libspdk_blob_bdev.so.11.0 00:04:21.082 CC module/accel/dsa/accel_dsa.o 00:04:21.082 SO libspdk_keyring_file.so.2.0 00:04:21.341 SYMLINK libspdk_accel_ioat.so 00:04:21.341 CC module/keyring/linux/keyring.o 00:04:21.341 CC module/accel/dsa/accel_dsa_rpc.o 00:04:21.341 SYMLINK libspdk_accel_error.so 00:04:21.341 CC module/fsdev/aio/linux_aio_mgr.o 00:04:21.341 SYMLINK libspdk_blob_bdev.so 00:04:21.341 CC module/keyring/linux/keyring_rpc.o 00:04:21.341 SYMLINK libspdk_keyring_file.so 00:04:21.341 CC module/accel/iaa/accel_iaa.o 00:04:21.341 CC module/accel/iaa/accel_iaa_rpc.o 00:04:21.341 LIB libspdk_keyring_linux.a 00:04:21.341 SO libspdk_keyring_linux.so.1.0 00:04:21.341 SYMLINK libspdk_keyring_linux.so 00:04:21.341 CC module/bdev/delay/vbdev_delay.o 00:04:21.341 LIB libspdk_accel_dsa.a 00:04:21.341 LIB libspdk_accel_iaa.a 00:04:21.600 CC module/bdev/error/vbdev_error.o 00:04:21.600 CC module/blobfs/bdev/blobfs_bdev.o 00:04:21.600 SO libspdk_accel_iaa.so.3.0 00:04:21.600 SO libspdk_accel_dsa.so.5.0 00:04:21.600 LIB libspdk_fsdev_aio.a 00:04:21.600 CC module/bdev/lvol/vbdev_lvol.o 00:04:21.600 CC module/bdev/gpt/gpt.o 00:04:21.600 CC module/bdev/malloc/bdev_malloc.o 00:04:21.600 SYMLINK libspdk_accel_iaa.so 00:04:21.601 CC module/bdev/gpt/vbdev_gpt.o 00:04:21.601 SYMLINK libspdk_accel_dsa.so 00:04:21.601 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:21.601 SO libspdk_fsdev_aio.so.1.0 00:04:21.601 LIB libspdk_sock_posix.a 00:04:21.601 SYMLINK libspdk_fsdev_aio.so 00:04:21.601 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:21.601 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:21.601 SO libspdk_sock_posix.so.6.0 00:04:21.601 SYMLINK libspdk_sock_posix.so 00:04:21.601 CC module/bdev/error/vbdev_error_rpc.o 00:04:21.601 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:21.860 LIB libspdk_blobfs_bdev.a 00:04:21.860 CC module/bdev/null/bdev_null.o 00:04:21.860 CC module/bdev/nvme/bdev_nvme.o 00:04:21.860 LIB libspdk_bdev_delay.a 00:04:21.860 LIB libspdk_bdev_gpt.a 00:04:21.860 SO libspdk_blobfs_bdev.so.6.0 00:04:21.860 SO libspdk_bdev_delay.so.6.0 00:04:21.860 SO libspdk_bdev_gpt.so.6.0 00:04:21.860 LIB libspdk_bdev_error.a 00:04:21.860 SYMLINK libspdk_blobfs_bdev.so 00:04:21.860 SO libspdk_bdev_error.so.6.0 00:04:21.860 CC module/bdev/null/bdev_null_rpc.o 00:04:21.860 SYMLINK libspdk_bdev_gpt.so 00:04:21.860 SYMLINK libspdk_bdev_delay.so 00:04:21.860 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:21.860 CC module/bdev/nvme/nvme_rpc.o 00:04:21.860 LIB libspdk_bdev_malloc.a 00:04:21.860 CC 
module/bdev/passthru/vbdev_passthru.o 00:04:21.860 SYMLINK libspdk_bdev_error.so 00:04:21.860 SO libspdk_bdev_malloc.so.6.0 00:04:21.860 SYMLINK libspdk_bdev_malloc.so 00:04:22.119 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:22.119 CC module/bdev/nvme/bdev_mdns_client.o 00:04:22.119 CC module/bdev/nvme/vbdev_opal.o 00:04:22.119 LIB libspdk_bdev_null.a 00:04:22.119 SO libspdk_bdev_null.so.6.0 00:04:22.119 CC module/bdev/raid/bdev_raid.o 00:04:22.119 LIB libspdk_bdev_lvol.a 00:04:22.119 CC module/bdev/raid/bdev_raid_rpc.o 00:04:22.119 SO libspdk_bdev_lvol.so.6.0 00:04:22.119 SYMLINK libspdk_bdev_null.so 00:04:22.119 CC module/bdev/raid/bdev_raid_sb.o 00:04:22.119 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:22.119 SYMLINK libspdk_bdev_lvol.so 00:04:22.119 LIB libspdk_bdev_passthru.a 00:04:22.119 SO libspdk_bdev_passthru.so.6.0 00:04:22.119 CC module/bdev/split/vbdev_split.o 00:04:22.377 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:22.377 SYMLINK libspdk_bdev_passthru.so 00:04:22.377 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:22.377 CC module/bdev/raid/raid0.o 00:04:22.377 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:22.377 CC module/bdev/split/vbdev_split_rpc.o 00:04:22.377 CC module/bdev/xnvme/bdev_xnvme.o 00:04:22.377 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:22.377 LIB libspdk_bdev_split.a 00:04:22.377 SO libspdk_bdev_split.so.6.0 00:04:22.377 CC module/bdev/raid/raid1.o 00:04:22.377 CC module/bdev/raid/concat.o 00:04:22.636 SYMLINK libspdk_bdev_split.so 00:04:22.636 CC module/bdev/aio/bdev_aio.o 00:04:22.636 CC module/bdev/aio/bdev_aio_rpc.o 00:04:22.636 LIB libspdk_bdev_xnvme.a 00:04:22.636 SO libspdk_bdev_xnvme.so.3.0 00:04:22.636 LIB libspdk_bdev_zone_block.a 00:04:22.636 SO libspdk_bdev_zone_block.so.6.0 00:04:22.636 CC module/bdev/ftl/bdev_ftl.o 00:04:22.636 SYMLINK libspdk_bdev_xnvme.so 00:04:22.636 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:22.636 CC module/bdev/iscsi/bdev_iscsi.o 00:04:22.636 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:22.636 SYMLINK libspdk_bdev_zone_block.so 00:04:22.895 LIB libspdk_bdev_aio.a 00:04:22.895 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:22.895 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:22.895 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:22.895 SO libspdk_bdev_aio.so.6.0 00:04:22.895 SYMLINK libspdk_bdev_aio.so 00:04:22.895 LIB libspdk_bdev_ftl.a 00:04:22.895 SO libspdk_bdev_ftl.so.6.0 00:04:22.895 LIB libspdk_bdev_iscsi.a 00:04:22.895 SO libspdk_bdev_iscsi.so.6.0 00:04:22.895 SYMLINK libspdk_bdev_ftl.so 00:04:23.154 SYMLINK libspdk_bdev_iscsi.so 00:04:23.154 LIB libspdk_bdev_raid.a 00:04:23.154 SO libspdk_bdev_raid.so.6.0 00:04:23.154 SYMLINK libspdk_bdev_raid.so 00:04:23.154 LIB libspdk_bdev_virtio.a 00:04:23.154 SO libspdk_bdev_virtio.so.6.0 00:04:23.412 SYMLINK libspdk_bdev_virtio.so 00:04:23.671 LIB libspdk_bdev_nvme.a 00:04:23.671 SO libspdk_bdev_nvme.so.7.0 00:04:23.930 SYMLINK libspdk_bdev_nvme.so 00:04:24.188 CC module/event/subsystems/fsdev/fsdev.o 00:04:24.188 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:24.188 CC module/event/subsystems/iobuf/iobuf.o 00:04:24.188 CC module/event/subsystems/vmd/vmd.o 00:04:24.188 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:24.188 CC module/event/subsystems/keyring/keyring.o 00:04:24.188 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:24.188 CC module/event/subsystems/sock/sock.o 00:04:24.188 CC module/event/subsystems/scheduler/scheduler.o 00:04:24.188 LIB libspdk_event_vhost_blk.a 00:04:24.188 LIB libspdk_event_sock.a 00:04:24.188 LIB libspdk_event_fsdev.a 
00:04:24.188 LIB libspdk_event_keyring.a 00:04:24.188 SO libspdk_event_vhost_blk.so.3.0 00:04:24.188 LIB libspdk_event_scheduler.a 00:04:24.188 LIB libspdk_event_vmd.a 00:04:24.446 LIB libspdk_event_iobuf.a 00:04:24.446 SO libspdk_event_sock.so.5.0 00:04:24.446 SO libspdk_event_fsdev.so.1.0 00:04:24.446 SO libspdk_event_keyring.so.1.0 00:04:24.446 SO libspdk_event_scheduler.so.4.0 00:04:24.446 SO libspdk_event_vmd.so.6.0 00:04:24.446 SYMLINK libspdk_event_vhost_blk.so 00:04:24.446 SO libspdk_event_iobuf.so.3.0 00:04:24.446 SYMLINK libspdk_event_fsdev.so 00:04:24.446 SYMLINK libspdk_event_sock.so 00:04:24.446 SYMLINK libspdk_event_keyring.so 00:04:24.446 SYMLINK libspdk_event_scheduler.so 00:04:24.446 SYMLINK libspdk_event_vmd.so 00:04:24.446 SYMLINK libspdk_event_iobuf.so 00:04:24.704 CC module/event/subsystems/accel/accel.o 00:04:24.704 LIB libspdk_event_accel.a 00:04:24.704 SO libspdk_event_accel.so.6.0 00:04:24.704 SYMLINK libspdk_event_accel.so 00:04:24.963 CC module/event/subsystems/bdev/bdev.o 00:04:25.221 LIB libspdk_event_bdev.a 00:04:25.221 SO libspdk_event_bdev.so.6.0 00:04:25.221 SYMLINK libspdk_event_bdev.so 00:04:25.480 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:25.480 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:25.480 CC module/event/subsystems/scsi/scsi.o 00:04:25.480 CC module/event/subsystems/nbd/nbd.o 00:04:25.480 CC module/event/subsystems/ublk/ublk.o 00:04:25.480 LIB libspdk_event_scsi.a 00:04:25.480 LIB libspdk_event_ublk.a 00:04:25.480 SO libspdk_event_scsi.so.6.0 00:04:25.480 SO libspdk_event_ublk.so.3.0 00:04:25.480 LIB libspdk_event_nbd.a 00:04:25.480 SYMLINK libspdk_event_scsi.so 00:04:25.743 SYMLINK libspdk_event_ublk.so 00:04:25.743 SO libspdk_event_nbd.so.6.0 00:04:25.743 LIB libspdk_event_nvmf.a 00:04:25.743 SO libspdk_event_nvmf.so.6.0 00:04:25.743 SYMLINK libspdk_event_nbd.so 00:04:25.743 SYMLINK libspdk_event_nvmf.so 00:04:25.743 CC module/event/subsystems/iscsi/iscsi.o 00:04:25.743 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:26.001 LIB libspdk_event_vhost_scsi.a 00:04:26.001 LIB libspdk_event_iscsi.a 00:04:26.001 SO libspdk_event_vhost_scsi.so.3.0 00:04:26.001 SO libspdk_event_iscsi.so.6.0 00:04:26.001 SYMLINK libspdk_event_vhost_scsi.so 00:04:26.001 SYMLINK libspdk_event_iscsi.so 00:04:26.001 SO libspdk.so.6.0 00:04:26.001 SYMLINK libspdk.so 00:04:26.260 CC test/rpc_client/rpc_client_test.o 00:04:26.260 CXX app/trace/trace.o 00:04:26.260 TEST_HEADER include/spdk/accel.h 00:04:26.260 TEST_HEADER include/spdk/accel_module.h 00:04:26.260 TEST_HEADER include/spdk/assert.h 00:04:26.260 TEST_HEADER include/spdk/barrier.h 00:04:26.260 TEST_HEADER include/spdk/base64.h 00:04:26.260 TEST_HEADER include/spdk/bdev.h 00:04:26.260 TEST_HEADER include/spdk/bdev_module.h 00:04:26.260 TEST_HEADER include/spdk/bdev_zone.h 00:04:26.260 TEST_HEADER include/spdk/bit_array.h 00:04:26.260 TEST_HEADER include/spdk/bit_pool.h 00:04:26.260 TEST_HEADER include/spdk/blob_bdev.h 00:04:26.260 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:26.260 TEST_HEADER include/spdk/blobfs.h 00:04:26.260 TEST_HEADER include/spdk/blob.h 00:04:26.260 TEST_HEADER include/spdk/conf.h 00:04:26.260 TEST_HEADER include/spdk/config.h 00:04:26.260 TEST_HEADER include/spdk/cpuset.h 00:04:26.260 TEST_HEADER include/spdk/crc16.h 00:04:26.260 TEST_HEADER include/spdk/crc32.h 00:04:26.260 TEST_HEADER include/spdk/crc64.h 00:04:26.260 TEST_HEADER include/spdk/dif.h 00:04:26.260 TEST_HEADER include/spdk/dma.h 00:04:26.260 TEST_HEADER include/spdk/endian.h 00:04:26.260 TEST_HEADER 
include/spdk/env_dpdk.h 00:04:26.260 TEST_HEADER include/spdk/env.h 00:04:26.260 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:26.260 TEST_HEADER include/spdk/event.h 00:04:26.260 TEST_HEADER include/spdk/fd_group.h 00:04:26.260 TEST_HEADER include/spdk/fd.h 00:04:26.260 TEST_HEADER include/spdk/file.h 00:04:26.260 TEST_HEADER include/spdk/fsdev.h 00:04:26.260 TEST_HEADER include/spdk/fsdev_module.h 00:04:26.260 TEST_HEADER include/spdk/ftl.h 00:04:26.260 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:26.260 CC test/thread/poller_perf/poller_perf.o 00:04:26.260 TEST_HEADER include/spdk/gpt_spec.h 00:04:26.260 TEST_HEADER include/spdk/hexlify.h 00:04:26.260 TEST_HEADER include/spdk/histogram_data.h 00:04:26.260 TEST_HEADER include/spdk/idxd.h 00:04:26.260 TEST_HEADER include/spdk/idxd_spec.h 00:04:26.260 CC examples/ioat/perf/perf.o 00:04:26.260 TEST_HEADER include/spdk/init.h 00:04:26.260 TEST_HEADER include/spdk/ioat.h 00:04:26.260 TEST_HEADER include/spdk/ioat_spec.h 00:04:26.260 TEST_HEADER include/spdk/iscsi_spec.h 00:04:26.260 TEST_HEADER include/spdk/json.h 00:04:26.260 TEST_HEADER include/spdk/jsonrpc.h 00:04:26.260 TEST_HEADER include/spdk/keyring.h 00:04:26.260 CC examples/util/zipf/zipf.o 00:04:26.260 TEST_HEADER include/spdk/keyring_module.h 00:04:26.260 TEST_HEADER include/spdk/likely.h 00:04:26.260 TEST_HEADER include/spdk/log.h 00:04:26.260 TEST_HEADER include/spdk/lvol.h 00:04:26.260 TEST_HEADER include/spdk/md5.h 00:04:26.260 TEST_HEADER include/spdk/memory.h 00:04:26.260 TEST_HEADER include/spdk/mmio.h 00:04:26.260 TEST_HEADER include/spdk/nbd.h 00:04:26.260 TEST_HEADER include/spdk/net.h 00:04:26.260 TEST_HEADER include/spdk/notify.h 00:04:26.260 TEST_HEADER include/spdk/nvme.h 00:04:26.260 TEST_HEADER include/spdk/nvme_intel.h 00:04:26.260 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:26.260 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:26.260 TEST_HEADER include/spdk/nvme_spec.h 00:04:26.261 TEST_HEADER include/spdk/nvme_zns.h 00:04:26.261 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:26.261 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:26.261 TEST_HEADER include/spdk/nvmf.h 00:04:26.261 CC test/dma/test_dma/test_dma.o 00:04:26.261 TEST_HEADER include/spdk/nvmf_spec.h 00:04:26.261 TEST_HEADER include/spdk/nvmf_transport.h 00:04:26.261 TEST_HEADER include/spdk/opal.h 00:04:26.261 TEST_HEADER include/spdk/opal_spec.h 00:04:26.261 CC test/app/bdev_svc/bdev_svc.o 00:04:26.261 TEST_HEADER include/spdk/pci_ids.h 00:04:26.261 TEST_HEADER include/spdk/pipe.h 00:04:26.261 TEST_HEADER include/spdk/queue.h 00:04:26.261 TEST_HEADER include/spdk/reduce.h 00:04:26.261 TEST_HEADER include/spdk/rpc.h 00:04:26.261 TEST_HEADER include/spdk/scheduler.h 00:04:26.261 TEST_HEADER include/spdk/scsi.h 00:04:26.519 TEST_HEADER include/spdk/scsi_spec.h 00:04:26.519 TEST_HEADER include/spdk/sock.h 00:04:26.519 LINK rpc_client_test 00:04:26.519 TEST_HEADER include/spdk/stdinc.h 00:04:26.519 CC test/env/mem_callbacks/mem_callbacks.o 00:04:26.519 TEST_HEADER include/spdk/string.h 00:04:26.519 TEST_HEADER include/spdk/thread.h 00:04:26.519 TEST_HEADER include/spdk/trace.h 00:04:26.519 TEST_HEADER include/spdk/trace_parser.h 00:04:26.519 TEST_HEADER include/spdk/tree.h 00:04:26.519 TEST_HEADER include/spdk/ublk.h 00:04:26.519 TEST_HEADER include/spdk/util.h 00:04:26.519 TEST_HEADER include/spdk/uuid.h 00:04:26.519 TEST_HEADER include/spdk/version.h 00:04:26.519 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:26.519 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:26.519 TEST_HEADER include/spdk/vhost.h 
00:04:26.519 TEST_HEADER include/spdk/vmd.h 00:04:26.519 TEST_HEADER include/spdk/xor.h 00:04:26.519 TEST_HEADER include/spdk/zipf.h 00:04:26.519 CXX test/cpp_headers/accel.o 00:04:26.519 LINK poller_perf 00:04:26.519 LINK zipf 00:04:26.519 LINK interrupt_tgt 00:04:26.519 CXX test/cpp_headers/accel_module.o 00:04:26.519 LINK ioat_perf 00:04:26.519 LINK bdev_svc 00:04:26.519 CC test/env/vtophys/vtophys.o 00:04:26.519 CXX test/cpp_headers/assert.o 00:04:26.519 LINK spdk_trace 00:04:26.519 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:26.777 CC examples/ioat/verify/verify.o 00:04:26.777 CC test/env/memory/memory_ut.o 00:04:26.778 LINK vtophys 00:04:26.778 CC test/event/event_perf/event_perf.o 00:04:26.778 CXX test/cpp_headers/barrier.o 00:04:26.778 LINK env_dpdk_post_init 00:04:26.778 LINK mem_callbacks 00:04:26.778 CC app/trace_record/trace_record.o 00:04:26.778 CXX test/cpp_headers/base64.o 00:04:26.778 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:26.778 LINK test_dma 00:04:26.778 LINK event_perf 00:04:26.778 CXX test/cpp_headers/bdev.o 00:04:26.778 LINK verify 00:04:27.036 CC test/event/reactor/reactor.o 00:04:27.036 CC test/event/reactor_perf/reactor_perf.o 00:04:27.036 CXX test/cpp_headers/bdev_module.o 00:04:27.036 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:27.036 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:27.036 LINK spdk_trace_record 00:04:27.036 CC test/env/pci/pci_ut.o 00:04:27.036 LINK reactor_perf 00:04:27.036 LINK reactor 00:04:27.294 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:27.294 CXX test/cpp_headers/bdev_zone.o 00:04:27.294 CXX test/cpp_headers/bit_array.o 00:04:27.294 CC examples/thread/thread/thread_ex.o 00:04:27.294 LINK nvme_fuzz 00:04:27.294 CC test/event/app_repeat/app_repeat.o 00:04:27.294 CC app/nvmf_tgt/nvmf_main.o 00:04:27.294 CXX test/cpp_headers/bit_pool.o 00:04:27.294 CXX test/cpp_headers/blob_bdev.o 00:04:27.294 LINK pci_ut 00:04:27.294 LINK thread 00:04:27.294 CC test/event/scheduler/scheduler.o 00:04:27.553 LINK app_repeat 00:04:27.553 CXX test/cpp_headers/blobfs_bdev.o 00:04:27.553 LINK nvmf_tgt 00:04:27.553 LINK vhost_fuzz 00:04:27.553 CXX test/cpp_headers/blobfs.o 00:04:27.553 LINK scheduler 00:04:27.553 CC examples/sock/hello_world/hello_sock.o 00:04:27.553 CC app/iscsi_tgt/iscsi_tgt.o 00:04:27.553 CC test/app/histogram_perf/histogram_perf.o 00:04:27.813 CC test/app/jsoncat/jsoncat.o 00:04:27.813 CC app/spdk_tgt/spdk_tgt.o 00:04:27.813 CXX test/cpp_headers/blob.o 00:04:27.813 CC test/app/stub/stub.o 00:04:27.813 LINK memory_ut 00:04:27.813 LINK histogram_perf 00:04:27.813 LINK jsoncat 00:04:27.813 CXX test/cpp_headers/conf.o 00:04:27.813 LINK iscsi_tgt 00:04:27.813 CC test/accel/dif/dif.o 00:04:27.813 LINK spdk_tgt 00:04:27.813 LINK hello_sock 00:04:27.813 CXX test/cpp_headers/config.o 00:04:27.813 LINK stub 00:04:27.813 CXX test/cpp_headers/cpuset.o 00:04:27.813 CXX test/cpp_headers/crc16.o 00:04:28.074 CC app/spdk_lspci/spdk_lspci.o 00:04:28.074 CXX test/cpp_headers/crc32.o 00:04:28.074 CC app/spdk_nvme_perf/perf.o 00:04:28.074 CC app/spdk_nvme_identify/identify.o 00:04:28.074 CC test/blobfs/mkfs/mkfs.o 00:04:28.074 LINK spdk_lspci 00:04:28.074 CC examples/vmd/lsvmd/lsvmd.o 00:04:28.074 CC examples/idxd/perf/perf.o 00:04:28.335 CXX test/cpp_headers/crc64.o 00:04:28.335 CC test/lvol/esnap/esnap.o 00:04:28.335 CXX test/cpp_headers/dif.o 00:04:28.335 LINK lsvmd 00:04:28.335 LINK mkfs 00:04:28.335 CXX test/cpp_headers/dma.o 00:04:28.335 LINK iscsi_fuzz 00:04:28.335 CC examples/vmd/led/led.o 00:04:28.595 LINK idxd_perf 
00:04:28.595 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:28.595 LINK dif 00:04:28.595 CXX test/cpp_headers/endian.o 00:04:28.595 LINK led 00:04:28.595 CC test/nvme/aer/aer.o 00:04:28.595 CC test/nvme/reset/reset.o 00:04:28.595 CXX test/cpp_headers/env_dpdk.o 00:04:28.596 CC test/nvme/sgl/sgl.o 00:04:28.856 LINK spdk_nvme_perf 00:04:28.856 LINK hello_fsdev 00:04:28.856 CC test/nvme/e2edp/nvme_dp.o 00:04:28.856 CC test/nvme/overhead/overhead.o 00:04:28.856 CXX test/cpp_headers/env.o 00:04:28.856 LINK aer 00:04:28.856 CXX test/cpp_headers/event.o 00:04:28.856 LINK reset 00:04:28.856 LINK spdk_nvme_identify 00:04:28.856 LINK sgl 00:04:28.856 CXX test/cpp_headers/fd_group.o 00:04:28.856 CC test/nvme/err_injection/err_injection.o 00:04:29.116 LINK nvme_dp 00:04:29.116 CC examples/accel/perf/accel_perf.o 00:04:29.116 CC app/spdk_nvme_discover/discovery_aer.o 00:04:29.116 CC app/spdk_top/spdk_top.o 00:04:29.116 LINK overhead 00:04:29.116 CC test/nvme/startup/startup.o 00:04:29.116 CXX test/cpp_headers/fd.o 00:04:29.116 CXX test/cpp_headers/file.o 00:04:29.116 LINK err_injection 00:04:29.116 CC test/nvme/reserve/reserve.o 00:04:29.116 LINK spdk_nvme_discover 00:04:29.116 LINK startup 00:04:29.375 CC test/nvme/simple_copy/simple_copy.o 00:04:29.375 CXX test/cpp_headers/fsdev.o 00:04:29.375 CC test/nvme/connect_stress/connect_stress.o 00:04:29.375 LINK reserve 00:04:29.375 CC test/bdev/bdevio/bdevio.o 00:04:29.375 CXX test/cpp_headers/fsdev_module.o 00:04:29.375 CC app/vhost/vhost.o 00:04:29.375 LINK simple_copy 00:04:29.375 CC test/nvme/boot_partition/boot_partition.o 00:04:29.375 CXX test/cpp_headers/ftl.o 00:04:29.375 LINK connect_stress 00:04:29.634 LINK accel_perf 00:04:29.634 LINK vhost 00:04:29.634 LINK boot_partition 00:04:29.634 CXX test/cpp_headers/fuse_dispatcher.o 00:04:29.634 CXX test/cpp_headers/gpt_spec.o 00:04:29.634 CC examples/blob/hello_world/hello_blob.o 00:04:29.634 CC test/nvme/compliance/nvme_compliance.o 00:04:29.634 CC test/nvme/fused_ordering/fused_ordering.o 00:04:29.634 LINK bdevio 00:04:29.634 CC app/spdk_dd/spdk_dd.o 00:04:29.904 LINK spdk_top 00:04:29.904 CXX test/cpp_headers/hexlify.o 00:04:29.904 LINK hello_blob 00:04:29.904 LINK fused_ordering 00:04:29.904 CC app/fio/nvme/fio_plugin.o 00:04:29.904 CC examples/nvme/hello_world/hello_world.o 00:04:29.904 CXX test/cpp_headers/histogram_data.o 00:04:29.904 CC examples/nvme/reconnect/reconnect.o 00:04:29.904 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:29.904 LINK nvme_compliance 00:04:29.904 CXX test/cpp_headers/idxd.o 00:04:30.167 CC examples/nvme/arbitration/arbitration.o 00:04:30.167 LINK spdk_dd 00:04:30.167 CC examples/blob/cli/blobcli.o 00:04:30.167 LINK hello_world 00:04:30.167 CXX test/cpp_headers/idxd_spec.o 00:04:30.167 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:30.167 CXX test/cpp_headers/init.o 00:04:30.167 LINK reconnect 00:04:30.167 CXX test/cpp_headers/ioat.o 00:04:30.426 CC test/nvme/fdp/fdp.o 00:04:30.426 LINK spdk_nvme 00:04:30.426 LINK doorbell_aers 00:04:30.426 LINK arbitration 00:04:30.426 CXX test/cpp_headers/ioat_spec.o 00:04:30.426 CC test/nvme/cuse/cuse.o 00:04:30.427 CXX test/cpp_headers/iscsi_spec.o 00:04:30.427 LINK nvme_manage 00:04:30.427 CC examples/bdev/hello_world/hello_bdev.o 00:04:30.427 CC app/fio/bdev/fio_plugin.o 00:04:30.427 CC examples/bdev/bdevperf/bdevperf.o 00:04:30.427 CC examples/nvme/hotplug/hotplug.o 00:04:30.685 CXX test/cpp_headers/json.o 00:04:30.685 LINK fdp 00:04:30.685 LINK blobcli 00:04:30.685 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:30.685 CXX 
test/cpp_headers/jsonrpc.o 00:04:30.685 LINK hello_bdev 00:04:30.685 CC examples/nvme/abort/abort.o 00:04:30.685 LINK hotplug 00:04:30.685 LINK cmb_copy 00:04:30.685 CXX test/cpp_headers/keyring.o 00:04:30.685 CXX test/cpp_headers/keyring_module.o 00:04:30.685 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:30.943 CXX test/cpp_headers/likely.o 00:04:30.943 CXX test/cpp_headers/log.o 00:04:30.943 CXX test/cpp_headers/lvol.o 00:04:30.943 CXX test/cpp_headers/md5.o 00:04:30.943 CXX test/cpp_headers/memory.o 00:04:30.943 LINK pmr_persistence 00:04:30.943 LINK spdk_bdev 00:04:30.943 CXX test/cpp_headers/mmio.o 00:04:30.943 CXX test/cpp_headers/nbd.o 00:04:30.943 CXX test/cpp_headers/net.o 00:04:31.201 CXX test/cpp_headers/notify.o 00:04:31.201 CXX test/cpp_headers/nvme.o 00:04:31.201 CXX test/cpp_headers/nvme_intel.o 00:04:31.201 CXX test/cpp_headers/nvme_ocssd.o 00:04:31.201 LINK abort 00:04:31.201 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:31.201 CXX test/cpp_headers/nvme_spec.o 00:04:31.201 CXX test/cpp_headers/nvme_zns.o 00:04:31.201 CXX test/cpp_headers/nvmf_cmd.o 00:04:31.201 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:31.201 CXX test/cpp_headers/nvmf.o 00:04:31.201 LINK bdevperf 00:04:31.201 CXX test/cpp_headers/nvmf_spec.o 00:04:31.201 CXX test/cpp_headers/nvmf_transport.o 00:04:31.460 CXX test/cpp_headers/opal.o 00:04:31.460 CXX test/cpp_headers/opal_spec.o 00:04:31.460 CXX test/cpp_headers/pci_ids.o 00:04:31.460 CXX test/cpp_headers/pipe.o 00:04:31.460 CXX test/cpp_headers/queue.o 00:04:31.460 CXX test/cpp_headers/reduce.o 00:04:31.460 CXX test/cpp_headers/rpc.o 00:04:31.460 CXX test/cpp_headers/scheduler.o 00:04:31.460 CXX test/cpp_headers/scsi.o 00:04:31.460 CXX test/cpp_headers/scsi_spec.o 00:04:31.460 CXX test/cpp_headers/sock.o 00:04:31.460 CXX test/cpp_headers/stdinc.o 00:04:31.460 CC examples/nvmf/nvmf/nvmf.o 00:04:31.460 CXX test/cpp_headers/string.o 00:04:31.460 CXX test/cpp_headers/thread.o 00:04:31.460 CXX test/cpp_headers/trace.o 00:04:31.460 CXX test/cpp_headers/trace_parser.o 00:04:31.719 LINK cuse 00:04:31.719 CXX test/cpp_headers/tree.o 00:04:31.719 CXX test/cpp_headers/ublk.o 00:04:31.719 CXX test/cpp_headers/util.o 00:04:31.719 CXX test/cpp_headers/uuid.o 00:04:31.719 CXX test/cpp_headers/version.o 00:04:31.719 CXX test/cpp_headers/vfio_user_pci.o 00:04:31.719 CXX test/cpp_headers/vfio_user_spec.o 00:04:31.719 CXX test/cpp_headers/vhost.o 00:04:31.719 CXX test/cpp_headers/vmd.o 00:04:31.719 CXX test/cpp_headers/xor.o 00:04:31.719 CXX test/cpp_headers/zipf.o 00:04:31.719 LINK nvmf 00:04:32.655 LINK esnap 00:04:32.914 00:04:32.914 real 0m59.572s 00:04:32.914 user 5m4.677s 00:04:32.914 sys 0m49.611s 00:04:32.914 03:04:36 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:32.914 03:04:36 make -- common/autotest_common.sh@10 -- $ set +x 00:04:32.914 ************************************ 00:04:32.914 END TEST make 00:04:32.914 ************************************ 00:04:32.914 03:04:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:32.914 03:04:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:32.914 03:04:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:32.914 03:04:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.914 03:04:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:32.914 03:04:36 -- pm/common@44 -- $ pid=5799 00:04:32.914 03:04:36 -- pm/common@50 -- $ kill -TERM 5799 00:04:32.914 03:04:36 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:32.914 03:04:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:32.914 03:04:36 -- pm/common@44 -- $ pid=5800 00:04:32.914 03:04:36 -- pm/common@50 -- $ kill -TERM 5800 00:04:32.914 03:04:36 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:32.914 03:04:36 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:32.914 03:04:36 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:32.914 03:04:36 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:32.914 03:04:36 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.915 03:04:36 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.915 03:04:36 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.915 03:04:36 -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.915 03:04:36 -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.915 03:04:36 -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.915 03:04:36 -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.915 03:04:36 -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.915 03:04:36 -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.915 03:04:36 -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.915 03:04:36 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.915 03:04:36 -- scripts/common.sh@344 -- # case "$op" in 00:04:32.915 03:04:36 -- scripts/common.sh@345 -- # : 1 00:04:32.915 03:04:36 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.915 03:04:36 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.915 03:04:36 -- scripts/common.sh@365 -- # decimal 1 00:04:32.915 03:04:36 -- scripts/common.sh@353 -- # local d=1 00:04:32.915 03:04:36 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.915 03:04:36 -- scripts/common.sh@355 -- # echo 1 00:04:32.915 03:04:36 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.915 03:04:36 -- scripts/common.sh@366 -- # decimal 2 00:04:32.915 03:04:36 -- scripts/common.sh@353 -- # local d=2 00:04:32.915 03:04:36 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.915 03:04:36 -- scripts/common.sh@355 -- # echo 2 00:04:32.915 03:04:36 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.915 03:04:36 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.915 03:04:36 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.915 03:04:36 -- scripts/common.sh@368 -- # return 0 00:04:32.915 03:04:36 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.915 03:04:36 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:32.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.915 --rc genhtml_branch_coverage=1 00:04:32.915 --rc genhtml_function_coverage=1 00:04:32.915 --rc genhtml_legend=1 00:04:32.915 --rc geninfo_all_blocks=1 00:04:32.915 --rc geninfo_unexecuted_blocks=1 00:04:32.915 00:04:32.915 ' 00:04:32.915 03:04:36 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:32.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.915 --rc genhtml_branch_coverage=1 00:04:32.915 --rc genhtml_function_coverage=1 00:04:32.915 --rc genhtml_legend=1 00:04:32.915 --rc geninfo_all_blocks=1 00:04:32.915 --rc geninfo_unexecuted_blocks=1 00:04:32.915 00:04:32.915 ' 00:04:32.915 03:04:36 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:32.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.915 --rc genhtml_branch_coverage=1 00:04:32.915 --rc 
genhtml_function_coverage=1 00:04:32.915 --rc genhtml_legend=1 00:04:32.915 --rc geninfo_all_blocks=1 00:04:32.915 --rc geninfo_unexecuted_blocks=1 00:04:32.915 00:04:32.915 ' 00:04:32.915 03:04:36 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:32.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.915 --rc genhtml_branch_coverage=1 00:04:32.915 --rc genhtml_function_coverage=1 00:04:32.915 --rc genhtml_legend=1 00:04:32.915 --rc geninfo_all_blocks=1 00:04:32.915 --rc geninfo_unexecuted_blocks=1 00:04:32.915 00:04:32.915 ' 00:04:32.915 03:04:36 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.915 03:04:36 -- nvmf/common.sh@7 -- # uname -s 00:04:32.915 03:04:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.915 03:04:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.915 03:04:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.915 03:04:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.915 03:04:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.915 03:04:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.915 03:04:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.915 03:04:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.915 03:04:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.915 03:04:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.915 03:04:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01d7ee47-a46b-4936-b643-475f931e6943 00:04:32.915 03:04:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=01d7ee47-a46b-4936-b643-475f931e6943 00:04:32.915 03:04:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.915 03:04:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.915 03:04:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.915 03:04:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.915 03:04:36 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.915 03:04:36 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.915 03:04:36 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.915 03:04:36 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.915 03:04:36 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.915 03:04:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.915 03:04:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.915 03:04:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.915 03:04:36 -- paths/export.sh@5 -- # export PATH 00:04:32.915 03:04:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.915 03:04:36 -- nvmf/common.sh@51 -- # : 0 00:04:32.915 03:04:36 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.915 03:04:36 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:32.915 03:04:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.915 03:04:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.915 03:04:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.915 03:04:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.915 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.915 03:04:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.915 03:04:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.915 03:04:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.915 03:04:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:32.915 03:04:36 -- spdk/autotest.sh@32 -- # uname -s 00:04:32.915 03:04:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:32.915 03:04:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:32.915 03:04:36 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:32.915 03:04:36 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:32.915 03:04:36 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:32.915 03:04:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:33.174 03:04:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:33.174 03:04:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:33.174 03:04:36 -- spdk/autotest.sh@48 -- # udevadm_pid=66915 00:04:33.174 03:04:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:33.174 03:04:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:33.174 03:04:36 -- pm/common@17 -- # local monitor 00:04:33.174 03:04:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:33.174 03:04:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:33.174 03:04:36 -- pm/common@25 -- # sleep 1 00:04:33.174 03:04:36 -- pm/common@21 -- # date +%s 00:04:33.174 03:04:36 -- pm/common@21 -- # date +%s 00:04:33.174 03:04:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731899076 00:04:33.174 03:04:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731899076 00:04:33.174 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731899076_collect-cpu-load.pm.log 00:04:33.174 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731899076_collect-vmstat.pm.log 00:04:34.110 03:04:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:34.110 03:04:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:34.110 03:04:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:34.110 03:04:37 -- common/autotest_common.sh@10 -- # set +x 00:04:34.110 03:04:37 -- spdk/autotest.sh@59 -- # create_test_list 
00:04:34.110 03:04:37 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:34.110 03:04:37 -- common/autotest_common.sh@10 -- # set +x 00:04:34.110 03:04:37 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:34.110 03:04:37 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:34.110 03:04:37 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:34.110 03:04:37 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:34.110 03:04:37 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:34.110 03:04:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:34.110 03:04:37 -- common/autotest_common.sh@1455 -- # uname 00:04:34.110 03:04:37 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:34.110 03:04:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:34.110 03:04:37 -- common/autotest_common.sh@1475 -- # uname 00:04:34.110 03:04:37 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:34.110 03:04:37 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:34.110 03:04:37 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:34.110 lcov: LCOV version 1.15 00:04:34.110 03:04:37 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:48.988 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:48.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:03.894 03:05:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:03.894 03:05:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.894 03:05:06 -- common/autotest_common.sh@10 -- # set +x 00:05:03.894 03:05:06 -- spdk/autotest.sh@78 -- # rm -f 00:05:03.894 03:05:06 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.152 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:04.152 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:04.152 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:04.152 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:04.152 03:05:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:04.152 03:05:07 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:04.152 03:05:07 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:04.152 03:05:07 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:04.152 03:05:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:04.152 03:05:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:04.152 03:05:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:04.152 03:05:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:04.152 03:05:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:04.152 03:05:07 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:04.152 03:05:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1c1n1 00:05:04.152 03:05:07 -- common/autotest_common.sh@1648 -- # local device=nvme1c1n1 00:05:04.152 03:05:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:05:04.152 03:05:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:04.152 03:05:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:04.152 03:05:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:04.152 03:05:07 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:04.152 03:05:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:04.152 03:05:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:04.152 03:05:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:04.152 03:05:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:04.152 03:05:07 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:04.152 03:05:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:04.152 03:05:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:04.152 03:05:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:04.152 03:05:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:04.152 03:05:07 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:04.152 03:05:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:04.152 03:05:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:04.152 03:05:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:04.152 03:05:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n2 00:05:04.152 03:05:07 -- common/autotest_common.sh@1648 -- # local device=nvme3n2 00:05:04.152 03:05:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:05:04.152 03:05:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:04.152 03:05:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:04.152 03:05:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n3 00:05:04.152 03:05:07 -- common/autotest_common.sh@1648 -- # local device=nvme3n3 00:05:04.152 03:05:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:05:04.152 03:05:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:04.152 03:05:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:04.153 03:05:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.153 03:05:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.153 03:05:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:04.153 03:05:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:04.153 03:05:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:04.153 No valid GPT data, bailing 00:05:04.153 03:05:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:04.411 03:05:07 -- scripts/common.sh@394 -- # pt= 00:05:04.411 03:05:07 -- scripts/common.sh@395 -- # return 1 00:05:04.411 03:05:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:04.411 1+0 records in 00:05:04.411 1+0 records out 00:05:04.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231208 s, 45.4 MB/s 
00:05:04.411 03:05:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.411 03:05:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.411 03:05:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:04.411 03:05:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:04.411 03:05:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:04.411 No valid GPT data, bailing 00:05:04.411 03:05:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:04.411 03:05:07 -- scripts/common.sh@394 -- # pt= 00:05:04.411 03:05:07 -- scripts/common.sh@395 -- # return 1 00:05:04.411 03:05:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:04.411 1+0 records in 00:05:04.411 1+0 records out 00:05:04.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00548383 s, 191 MB/s 00:05:04.411 03:05:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.411 03:05:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.411 03:05:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:04.411 03:05:07 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:04.411 03:05:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:04.411 No valid GPT data, bailing 00:05:04.411 03:05:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:04.411 03:05:07 -- scripts/common.sh@394 -- # pt= 00:05:04.411 03:05:07 -- scripts/common.sh@395 -- # return 1 00:05:04.411 03:05:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:04.411 1+0 records in 00:05:04.411 1+0 records out 00:05:04.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00554226 s, 189 MB/s 00:05:04.411 03:05:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.411 03:05:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.411 03:05:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:04.411 03:05:07 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:04.411 03:05:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:04.411 No valid GPT data, bailing 00:05:04.411 03:05:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:04.669 03:05:07 -- scripts/common.sh@394 -- # pt= 00:05:04.669 03:05:07 -- scripts/common.sh@395 -- # return 1 00:05:04.669 03:05:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:04.669 1+0 records in 00:05:04.669 1+0 records out 00:05:04.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00538758 s, 195 MB/s 00:05:04.669 03:05:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.669 03:05:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.669 03:05:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:05:04.669 03:05:08 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:05:04.669 03:05:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:05:04.669 No valid GPT data, bailing 00:05:04.669 03:05:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:05:04.669 03:05:08 -- scripts/common.sh@394 -- # pt= 00:05:04.669 03:05:08 -- scripts/common.sh@395 -- # return 1 00:05:04.669 03:05:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:05:04.669 1+0 records in 00:05:04.669 1+0 records out 00:05:04.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551626 s, 190 
MB/s 00:05:04.669 03:05:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.669 03:05:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:04.669 03:05:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:05:04.669 03:05:08 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:05:04.669 03:05:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:05:04.669 No valid GPT data, bailing 00:05:04.669 03:05:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:05:04.669 03:05:08 -- scripts/common.sh@394 -- # pt= 00:05:04.669 03:05:08 -- scripts/common.sh@395 -- # return 1 00:05:04.669 03:05:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:05:04.669 1+0 records in 00:05:04.669 1+0 records out 00:05:04.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452345 s, 232 MB/s 00:05:04.669 03:05:08 -- spdk/autotest.sh@105 -- # sync 00:05:04.669 03:05:08 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:04.669 03:05:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:04.669 03:05:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:06.621 03:05:09 -- spdk/autotest.sh@111 -- # uname -s 00:05:06.621 03:05:09 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:06.621 03:05:09 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:06.621 03:05:09 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:06.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.188 Hugepages 00:05:07.188 node hugesize free / total 00:05:07.188 node0 1048576kB 0 / 0 00:05:07.188 node0 2048kB 0 / 0 00:05:07.188 00:05:07.188 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:07.188 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:07.188 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:07.188 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:07.188 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:05:07.446 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:07.446 03:05:10 -- spdk/autotest.sh@117 -- # uname -s 00:05:07.446 03:05:10 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:07.446 03:05:10 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:07.446 03:05:10 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:07.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.269 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.269 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.269 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.269 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.527 03:05:11 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:09.459 03:05:12 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:09.459 03:05:12 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:09.459 03:05:12 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:09.459 03:05:12 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:09.459 03:05:12 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:09.459 03:05:12 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:09.459 03:05:12 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
00:05:09.459 03:05:12 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:09.459 03:05:12 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:09.459 03:05:12 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:09.459 03:05:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:09.459 03:05:12 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.717 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.975 Waiting for block devices as requested 00:05:09.975 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:09.975 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:09.975 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:10.233 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.507 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:15.507 03:05:18 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:15.507 03:05:18 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:15.507 03:05:18 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:15.507 03:05:18 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:15.507 03:05:18 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:15.507 03:05:18 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:15.507 03:05:18 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:15.507 03:05:18 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:15.507 03:05:18 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:15.507 03:05:18 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:15.507 03:05:18 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:15.507 03:05:18 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1541 -- # continue 00:05:15.507 03:05:18 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:15.507 03:05:18 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:15.507 03:05:18 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:15.507 03:05:18 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:15.507 03:05:18 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
00:05:15.507 03:05:18 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:15.507 03:05:18 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:15.507 03:05:18 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:15.507 03:05:18 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:15.507 03:05:18 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:15.507 03:05:18 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:15.507 03:05:18 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1541 -- # continue 00:05:15.507 03:05:18 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:15.507 03:05:18 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:15.507 03:05:18 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:15.507 03:05:18 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:15.507 03:05:18 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:15.507 03:05:18 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:15.507 03:05:18 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:15.507 03:05:18 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1541 -- # continue 00:05:15.507 03:05:18 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:15.507 03:05:18 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:15.507 03:05:18 -- 
common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:15.507 03:05:18 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:15.507 03:05:18 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:15.507 03:05:18 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:15.507 03:05:18 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:15.507 03:05:18 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:15.507 03:05:18 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:15.507 03:05:18 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:15.507 03:05:18 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:15.507 03:05:18 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:15.507 03:05:18 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:15.507 03:05:18 -- common/autotest_common.sh@1541 -- # continue 00:05:15.507 03:05:18 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:15.507 03:05:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:15.507 03:05:18 -- common/autotest_common.sh@10 -- # set +x 00:05:15.507 03:05:18 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:15.508 03:05:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.508 03:05:18 -- common/autotest_common.sh@10 -- # set +x 00:05:15.508 03:05:18 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.769 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.341 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.341 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.341 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.341 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.341 03:05:19 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:16.341 03:05:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.341 03:05:19 -- common/autotest_common.sh@10 -- # set +x 00:05:16.341 03:05:19 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:16.341 03:05:19 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:16.341 03:05:19 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:16.341 03:05:19 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:16.341 03:05:19 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:16.341 03:05:19 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:16.341 03:05:19 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:16.341 03:05:19 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:16.341 03:05:19 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:16.341 
03:05:19 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:16.341 03:05:19 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:16.341 03:05:19 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:16.341 03:05:19 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:16.341 03:05:19 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:16.341 03:05:19 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:16.341 03:05:19 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:16.341 03:05:19 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:16.341 03:05:19 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:16.341 03:05:19 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:16.341 03:05:19 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:16.341 03:05:19 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:16.341 03:05:19 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:16.341 03:05:19 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:16.341 03:05:19 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:16.341 03:05:19 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:16.341 03:05:19 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:16.341 03:05:19 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:16.341 03:05:19 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:16.341 03:05:19 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:16.341 03:05:19 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:16.341 03:05:19 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:16.341 03:05:19 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:16.341 03:05:19 -- common/autotest_common.sh@1570 -- # return 0 00:05:16.341 03:05:19 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:16.341 03:05:19 -- common/autotest_common.sh@1578 -- # return 0 00:05:16.341 03:05:19 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:16.341 03:05:19 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:16.341 03:05:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:16.341 03:05:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:16.341 03:05:19 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:16.341 03:05:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.341 03:05:19 -- common/autotest_common.sh@10 -- # set +x 00:05:16.341 03:05:19 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:16.341 03:05:19 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:16.342 03:05:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.342 03:05:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.342 03:05:19 -- common/autotest_common.sh@10 -- # set +x 00:05:16.342 ************************************ 00:05:16.342 START TEST env 00:05:16.342 ************************************ 00:05:16.342 03:05:19 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:16.342 * Looking for test storage... 
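The opal_revert_cleanup path above filters controllers by PCI device ID: it reads /sys/bus/pci/devices/<bdf>/device for every BDF returned by gen_nvme.sh and keeps matches for 0x0a54 (an ID commonly associated with Intel DC P4500/P4600-class drives). The QEMU controllers here all report 0x0010, so nothing matches and the cleanup returns immediately. A condensed sketch of the filter, not the verbatim source:

    get_nvme_bdfs_by_id() {                      # usage: get_nvme_bdfs_by_id 0x0a54
        local id=$1 bdf
        for bdf in $(get_nvme_bdfs); do          # BDFs from scripts/gen_nvme.sh
            if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$id" ]]; then
                echo "$bdf"
            fi
        done
    }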
00:05:16.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:16.342 03:05:19 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:16.342 03:05:19 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:16.342 03:05:19 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:16.603 03:05:19 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:16.603 03:05:19 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.603 03:05:19 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.603 03:05:19 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.603 03:05:19 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.603 03:05:19 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.603 03:05:19 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.603 03:05:19 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.603 03:05:19 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.603 03:05:19 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.603 03:05:19 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.603 03:05:19 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.603 03:05:19 env -- scripts/common.sh@344 -- # case "$op" in 00:05:16.603 03:05:19 env -- scripts/common.sh@345 -- # : 1 00:05:16.603 03:05:19 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.603 03:05:19 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.603 03:05:19 env -- scripts/common.sh@365 -- # decimal 1 00:05:16.603 03:05:19 env -- scripts/common.sh@353 -- # local d=1 00:05:16.603 03:05:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.603 03:05:19 env -- scripts/common.sh@355 -- # echo 1 00:05:16.603 03:05:19 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.603 03:05:19 env -- scripts/common.sh@366 -- # decimal 2 00:05:16.603 03:05:19 env -- scripts/common.sh@353 -- # local d=2 00:05:16.603 03:05:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.603 03:05:19 env -- scripts/common.sh@355 -- # echo 2 00:05:16.603 03:05:19 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.603 03:05:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.603 03:05:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.603 03:05:19 env -- scripts/common.sh@368 -- # return 0 00:05:16.603 03:05:19 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.603 03:05:19 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:16.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.603 --rc genhtml_branch_coverage=1 00:05:16.603 --rc genhtml_function_coverage=1 00:05:16.603 --rc genhtml_legend=1 00:05:16.603 --rc geninfo_all_blocks=1 00:05:16.603 --rc geninfo_unexecuted_blocks=1 00:05:16.603 00:05:16.603 ' 00:05:16.603 03:05:19 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:16.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.603 --rc genhtml_branch_coverage=1 00:05:16.603 --rc genhtml_function_coverage=1 00:05:16.603 --rc genhtml_legend=1 00:05:16.603 --rc geninfo_all_blocks=1 00:05:16.603 --rc geninfo_unexecuted_blocks=1 00:05:16.603 00:05:16.603 ' 00:05:16.603 03:05:19 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:16.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.603 --rc genhtml_branch_coverage=1 00:05:16.603 --rc genhtml_function_coverage=1 00:05:16.603 --rc 
genhtml_legend=1 00:05:16.603 --rc geninfo_all_blocks=1 00:05:16.603 --rc geninfo_unexecuted_blocks=1 00:05:16.603 00:05:16.603 ' 00:05:16.603 03:05:19 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:16.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.603 --rc genhtml_branch_coverage=1 00:05:16.603 --rc genhtml_function_coverage=1 00:05:16.603 --rc genhtml_legend=1 00:05:16.603 --rc geninfo_all_blocks=1 00:05:16.603 --rc geninfo_unexecuted_blocks=1 00:05:16.603 00:05:16.603 ' 00:05:16.603 03:05:19 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:16.603 03:05:19 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.603 03:05:19 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.603 03:05:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.603 ************************************ 00:05:16.603 START TEST env_memory 00:05:16.603 ************************************ 00:05:16.603 03:05:19 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:16.603 00:05:16.603 00:05:16.603 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.603 http://cunit.sourceforge.net/ 00:05:16.603 00:05:16.603 00:05:16.603 Suite: memory 00:05:16.603 Test: alloc and free memory map ...[2024-11-18 03:05:20.040959] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:16.603 passed 00:05:16.603 Test: mem map translation ...[2024-11-18 03:05:20.079856] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:16.603 [2024-11-18 03:05:20.079938] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:16.603 [2024-11-18 03:05:20.080016] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:16.603 [2024-11-18 03:05:20.080035] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:16.603 passed 00:05:16.603 Test: mem map registration ...[2024-11-18 03:05:20.148253] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:16.603 [2024-11-18 03:05:20.148310] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:16.603 passed 00:05:16.881 Test: mem map adjacent registrations ...passed 00:05:16.881 00:05:16.881 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.881 suites 1 1 n/a 0 0 00:05:16.881 tests 4 4 4 0 0 00:05:16.881 asserts 152 152 152 0 n/a 00:05:16.881 00:05:16.881 Elapsed time = 0.233 seconds 00:05:16.881 00:05:16.881 real 0m0.270s 00:05:16.881 user 0m0.239s 00:05:16.881 sys 0m0.024s 00:05:16.881 03:05:20 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.881 ************************************ 00:05:16.881 END TEST env_memory 00:05:16.881 ************************************ 00:05:16.881 03:05:20 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:16.881 03:05:20 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:16.881 03:05:20 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.881 03:05:20 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.881 03:05:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.881 ************************************ 00:05:16.881 START TEST env_vtophys 00:05:16.881 ************************************ 00:05:16.881 03:05:20 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:16.881 EAL: lib.eal log level changed from notice to debug 00:05:16.881 EAL: Detected lcore 0 as core 0 on socket 0 00:05:16.881 EAL: Detected lcore 1 as core 0 on socket 0 00:05:16.881 EAL: Detected lcore 2 as core 0 on socket 0 00:05:16.881 EAL: Detected lcore 3 as core 0 on socket 0 00:05:16.881 EAL: Detected lcore 4 as core 0 on socket 0 00:05:16.881 EAL: Detected lcore 5 as core 0 on socket 0 00:05:16.881 EAL: Detected lcore 6 as core 0 on socket 0 00:05:16.881 EAL: Detected lcore 7 as core 0 on socket 0 00:05:16.881 EAL: Detected lcore 8 as core 0 on socket 0 00:05:16.881 EAL: Detected lcore 9 as core 0 on socket 0 00:05:16.881 EAL: Maximum logical cores by configuration: 128 00:05:16.881 EAL: Detected CPU lcores: 10 00:05:16.881 EAL: Detected NUMA nodes: 1 00:05:16.881 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:16.881 EAL: Detected shared linkage of DPDK 00:05:16.881 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:16.881 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:16.881 EAL: Registered [vdev] bus. 00:05:16.881 EAL: bus.vdev log level changed from disabled to notice 00:05:16.881 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:16.881 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:16.881 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:16.881 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:16.881 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:16.881 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:16.881 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:16.881 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:16.881 EAL: No shared files mode enabled, IPC will be disabled 00:05:16.881 EAL: No shared files mode enabled, IPC is disabled 00:05:16.881 EAL: Selected IOVA mode 'PA' 00:05:16.881 EAL: Probing VFIO support... 00:05:16.881 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:16.881 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:16.881 EAL: Ask a virtual area of 0x2e000 bytes 00:05:16.881 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:16.881 EAL: Setting up physically contiguous memory... 
00:05:16.881 EAL: Setting maximum number of open files to 524288 00:05:16.881 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:16.881 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:16.881 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.881 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:16.881 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.881 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.881 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:16.881 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:16.881 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.881 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:16.881 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.881 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.881 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:16.881 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:16.881 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.881 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:16.881 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.881 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.881 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:16.881 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:16.881 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.881 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:16.881 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.881 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.881 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:16.881 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:16.881 EAL: Hugepages will be freed exactly as allocated. 00:05:16.881 EAL: No shared files mode enabled, IPC is disabled 00:05:16.881 EAL: No shared files mode enabled, IPC is disabled 00:05:16.881 EAL: TSC frequency is ~2600000 KHz 00:05:16.881 EAL: Main lcore 0 is ready (tid=7f8ff5a18a40;cpuset=[0]) 00:05:16.881 EAL: Trying to obtain current memory policy. 00:05:16.881 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.881 EAL: Restoring previous memory policy: 0 00:05:16.881 EAL: request: mp_malloc_sync 00:05:16.881 EAL: No shared files mode enabled, IPC is disabled 00:05:16.881 EAL: Heap on socket 0 was expanded by 2MB 00:05:16.881 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:16.881 EAL: No shared files mode enabled, IPC is disabled 00:05:16.881 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:16.881 EAL: Mem event callback 'spdk:(nil)' registered 00:05:16.881 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:17.142 00:05:17.142 00:05:17.142 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.142 http://cunit.sourceforge.net/ 00:05:17.142 00:05:17.142 00:05:17.142 Suite: components_suite 00:05:17.405 Test: vtophys_malloc_test ...passed 00:05:17.405 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
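The reservations above follow directly from the memseg geometry: each of the 4 segment lists covers n_segs:8192 pages of hugepage_sz:2097152 bytes, i.e. 8192 * 2 MiB = 16 GiB = 0x400000000 bytes, which is exactly the size of each 0x400000000 virtual-area request; the small 0x61000 area per list holds the list's bookkeeping. Quick check:

    printf '0x%x\n' $(( 8192 * 2097152 ))   # 0x400000000, 16 GiB per memseg list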
00:05:17.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.405 EAL: Restoring previous memory policy: 4 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.405 EAL: request: mp_malloc_sync 00:05:17.405 EAL: No shared files mode enabled, IPC is disabled 00:05:17.405 EAL: Heap on socket 0 was expanded by 4MB 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.405 EAL: request: mp_malloc_sync 00:05:17.405 EAL: No shared files mode enabled, IPC is disabled 00:05:17.405 EAL: Heap on socket 0 was shrunk by 4MB 00:05:17.405 EAL: Trying to obtain current memory policy. 00:05:17.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.405 EAL: Restoring previous memory policy: 4 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.405 EAL: request: mp_malloc_sync 00:05:17.405 EAL: No shared files mode enabled, IPC is disabled 00:05:17.405 EAL: Heap on socket 0 was expanded by 6MB 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.405 EAL: request: mp_malloc_sync 00:05:17.405 EAL: No shared files mode enabled, IPC is disabled 00:05:17.405 EAL: Heap on socket 0 was shrunk by 6MB 00:05:17.405 EAL: Trying to obtain current memory policy. 00:05:17.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.405 EAL: Restoring previous memory policy: 4 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.405 EAL: request: mp_malloc_sync 00:05:17.405 EAL: No shared files mode enabled, IPC is disabled 00:05:17.405 EAL: Heap on socket 0 was expanded by 10MB 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.405 EAL: request: mp_malloc_sync 00:05:17.405 EAL: No shared files mode enabled, IPC is disabled 00:05:17.405 EAL: Heap on socket 0 was shrunk by 10MB 00:05:17.405 EAL: Trying to obtain current memory policy. 00:05:17.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.405 EAL: Restoring previous memory policy: 4 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.405 EAL: request: mp_malloc_sync 00:05:17.405 EAL: No shared files mode enabled, IPC is disabled 00:05:17.405 EAL: Heap on socket 0 was expanded by 18MB 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.405 EAL: request: mp_malloc_sync 00:05:17.405 EAL: No shared files mode enabled, IPC is disabled 00:05:17.405 EAL: Heap on socket 0 was shrunk by 18MB 00:05:17.405 EAL: Trying to obtain current memory policy. 00:05:17.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.405 EAL: Restoring previous memory policy: 4 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.405 EAL: request: mp_malloc_sync 00:05:17.405 EAL: No shared files mode enabled, IPC is disabled 00:05:17.405 EAL: Heap on socket 0 was expanded by 34MB 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.405 EAL: request: mp_malloc_sync 00:05:17.405 EAL: No shared files mode enabled, IPC is disabled 00:05:17.405 EAL: Heap on socket 0 was shrunk by 34MB 00:05:17.405 EAL: Trying to obtain current memory policy. 
00:05:17.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.405 EAL: Restoring previous memory policy: 4 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.405 EAL: request: mp_malloc_sync 00:05:17.405 EAL: No shared files mode enabled, IPC is disabled 00:05:17.405 EAL: Heap on socket 0 was expanded by 66MB 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.405 EAL: request: mp_malloc_sync 00:05:17.405 EAL: No shared files mode enabled, IPC is disabled 00:05:17.405 EAL: Heap on socket 0 was shrunk by 66MB 00:05:17.405 EAL: Trying to obtain current memory policy. 00:05:17.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.405 EAL: Restoring previous memory policy: 4 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.405 EAL: request: mp_malloc_sync 00:05:17.405 EAL: No shared files mode enabled, IPC is disabled 00:05:17.405 EAL: Heap on socket 0 was expanded by 130MB 00:05:17.405 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.406 EAL: request: mp_malloc_sync 00:05:17.406 EAL: No shared files mode enabled, IPC is disabled 00:05:17.406 EAL: Heap on socket 0 was shrunk by 130MB 00:05:17.406 EAL: Trying to obtain current memory policy. 00:05:17.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.406 EAL: Restoring previous memory policy: 4 00:05:17.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.406 EAL: request: mp_malloc_sync 00:05:17.406 EAL: No shared files mode enabled, IPC is disabled 00:05:17.406 EAL: Heap on socket 0 was expanded by 258MB 00:05:17.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.406 EAL: request: mp_malloc_sync 00:05:17.406 EAL: No shared files mode enabled, IPC is disabled 00:05:17.406 EAL: Heap on socket 0 was shrunk by 258MB 00:05:17.406 EAL: Trying to obtain current memory policy. 00:05:17.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.667 EAL: Restoring previous memory policy: 4 00:05:17.667 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.667 EAL: request: mp_malloc_sync 00:05:17.667 EAL: No shared files mode enabled, IPC is disabled 00:05:17.667 EAL: Heap on socket 0 was expanded by 514MB 00:05:17.667 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.667 EAL: request: mp_malloc_sync 00:05:17.667 EAL: No shared files mode enabled, IPC is disabled 00:05:17.667 EAL: Heap on socket 0 was shrunk by 514MB 00:05:17.667 EAL: Trying to obtain current memory policy. 
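The expand/shrink deltas in this suite follow a doubling schedule: each round appears to allocate 2^k MiB on top of the 2 MiB already mapped at startup, so the heap deltas run 4, 6, 10, 18, 34, 66, 130, 258 and 514 MiB above and finish with 1026 MiB below (2 + 2^k for k = 1..10). The sequence can be reproduced with:

    for k in $(seq 1 10); do echo $(( 2 + (1 << k) )); done   # 4 6 10 18 ... 1026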
00:05:17.667 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.929 EAL: Restoring previous memory policy: 4 00:05:17.929 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.929 EAL: request: mp_malloc_sync 00:05:17.929 EAL: No shared files mode enabled, IPC is disabled 00:05:17.929 EAL: Heap on socket 0 was expanded by 1026MB 00:05:17.929 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.189 passed 00:05:18.189 00:05:18.189 EAL: request: mp_malloc_sync 00:05:18.189 EAL: No shared files mode enabled, IPC is disabled 00:05:18.189 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:18.189 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.189 suites 1 1 n/a 0 0 00:05:18.189 tests 2 2 2 0 0 00:05:18.189 asserts 5358 5358 5358 0 n/a 00:05:18.189 00:05:18.189 Elapsed time = 1.068 seconds 00:05:18.189 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.189 EAL: request: mp_malloc_sync 00:05:18.189 EAL: No shared files mode enabled, IPC is disabled 00:05:18.189 EAL: Heap on socket 0 was shrunk by 2MB 00:05:18.189 EAL: No shared files mode enabled, IPC is disabled 00:05:18.189 EAL: No shared files mode enabled, IPC is disabled 00:05:18.189 EAL: No shared files mode enabled, IPC is disabled 00:05:18.189 00:05:18.189 real 0m1.290s 00:05:18.189 user 0m0.509s 00:05:18.189 sys 0m0.648s 00:05:18.189 03:05:21 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.189 ************************************ 00:05:18.189 END TEST env_vtophys 00:05:18.189 ************************************ 00:05:18.189 03:05:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:18.189 03:05:21 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:18.189 03:05:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.189 03:05:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.189 03:05:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.189 ************************************ 00:05:18.189 START TEST env_pci 00:05:18.189 ************************************ 00:05:18.189 03:05:21 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:18.189 00:05:18.189 00:05:18.189 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.189 http://cunit.sourceforge.net/ 00:05:18.189 00:05:18.189 00:05:18.189 Suite: pci 00:05:18.189 Test: pci_hook ...[2024-11-18 03:05:21.667072] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69629 has claimed it 00:05:18.189 passed 00:05:18.189 00:05:18.189 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.189 suites 1 1 n/a 0 0 00:05:18.189 tests 1 1 1 0 0 00:05:18.190 asserts 25 25 25 0 n/a 00:05:18.190 00:05:18.190 Elapsed time = 0.004 seconds 00:05:18.190 EAL: Cannot find device (10000:00:01.0) 00:05:18.190 EAL: Failed to attach device on primary process 00:05:18.190 00:05:18.190 real 0m0.052s 00:05:18.190 user 0m0.021s 00:05:18.190 sys 0m0.030s 00:05:18.190 03:05:21 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.190 03:05:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:18.190 ************************************ 00:05:18.190 END TEST env_pci 00:05:18.190 ************************************ 00:05:18.190 03:05:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:18.190 03:05:21 env -- env/env.sh@15 -- # uname 00:05:18.190 03:05:21 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:18.190 03:05:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:18.190 03:05:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:18.190 03:05:21 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:18.190 03:05:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.190 03:05:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.448 ************************************ 00:05:18.448 START TEST env_dpdk_post_init 00:05:18.448 ************************************ 00:05:18.448 03:05:21 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:18.448 EAL: Detected CPU lcores: 10 00:05:18.448 EAL: Detected NUMA nodes: 1 00:05:18.448 EAL: Detected shared linkage of DPDK 00:05:18.448 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.448 EAL: Selected IOVA mode 'PA' 00:05:18.448 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.448 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:18.448 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:18.448 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:18.448 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:18.448 Starting DPDK initialization... 00:05:18.448 Starting SPDK post initialization... 00:05:18.448 SPDK NVMe probe 00:05:18.448 Attaching to 0000:00:10.0 00:05:18.448 Attaching to 0000:00:11.0 00:05:18.448 Attaching to 0000:00:12.0 00:05:18.448 Attaching to 0000:00:13.0 00:05:18.448 Attached to 0000:00:13.0 00:05:18.448 Attached to 0000:00:10.0 00:05:18.448 Attached to 0000:00:11.0 00:05:18.448 Attached to 0000:00:12.0 00:05:18.448 Cleaning up... 
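Note the Attached lines land out of probe order (0000:00:13.0 first): attach completion is asynchronous, so callback order need not match BDF order. The resulting driver bindings can be confirmed from sysfs with something like:

    for d in /sys/bus/pci/devices/0000:00:1[0-3].0; do
        printf '%s -> %s\n' "${d##*/}" "$(basename "$(readlink "$d/driver")")"
    done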
00:05:18.448 00:05:18.448 real 0m0.219s 00:05:18.448 user 0m0.062s 00:05:18.448 sys 0m0.058s 00:05:18.449 03:05:21 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.449 ************************************ 00:05:18.449 END TEST env_dpdk_post_init 00:05:18.449 ************************************ 00:05:18.449 03:05:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.707 03:05:22 env -- env/env.sh@26 -- # uname 00:05:18.707 03:05:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:18.707 03:05:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.707 03:05:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.707 03:05:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.707 03:05:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.707 ************************************ 00:05:18.707 START TEST env_mem_callbacks 00:05:18.707 ************************************ 00:05:18.707 03:05:22 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.707 EAL: Detected CPU lcores: 10 00:05:18.707 EAL: Detected NUMA nodes: 1 00:05:18.707 EAL: Detected shared linkage of DPDK 00:05:18.707 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.707 EAL: Selected IOVA mode 'PA' 00:05:18.707 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.707 00:05:18.707 00:05:18.707 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.707 http://cunit.sourceforge.net/ 00:05:18.707 00:05:18.707 00:05:18.707 Suite: memory 00:05:18.707 Test: test ... 00:05:18.707 register 0x200000200000 2097152 00:05:18.707 malloc 3145728 00:05:18.707 register 0x200000400000 4194304 00:05:18.707 buf 0x200000500000 len 3145728 PASSED 00:05:18.707 malloc 64 00:05:18.707 buf 0x2000004fff40 len 64 PASSED 00:05:18.707 malloc 4194304 00:05:18.707 register 0x200000800000 6291456 00:05:18.707 buf 0x200000a00000 len 4194304 PASSED 00:05:18.707 free 0x200000500000 3145728 00:05:18.707 free 0x2000004fff40 64 00:05:18.707 unregister 0x200000400000 4194304 PASSED 00:05:18.707 free 0x200000a00000 4194304 00:05:18.707 unregister 0x200000800000 6291456 PASSED 00:05:18.707 malloc 8388608 00:05:18.707 register 0x200000400000 10485760 00:05:18.707 buf 0x200000600000 len 8388608 PASSED 00:05:18.707 free 0x200000600000 8388608 00:05:18.707 unregister 0x200000400000 10485760 PASSED 00:05:18.707 passed 00:05:18.707 00:05:18.707 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.707 suites 1 1 n/a 0 0 00:05:18.707 tests 1 1 1 0 0 00:05:18.707 asserts 15 15 15 0 n/a 00:05:18.707 00:05:18.707 Elapsed time = 0.010 seconds 00:05:18.707 00:05:18.707 real 0m0.164s 00:05:18.707 user 0m0.024s 00:05:18.707 sys 0m0.038s 00:05:18.707 03:05:22 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.707 ************************************ 00:05:18.707 END TEST env_mem_callbacks 00:05:18.707 ************************************ 00:05:18.707 03:05:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:18.707 00:05:18.707 real 0m2.429s 00:05:18.707 user 0m1.015s 00:05:18.707 sys 0m1.002s 00:05:18.707 03:05:22 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.707 ************************************ 00:05:18.707 END TEST env 00:05:18.707 ************************************ 00:05:18.707 03:05:22 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:18.966 03:05:22 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:18.966 03:05:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.966 03:05:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.966 03:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:18.966 ************************************ 00:05:18.966 START TEST rpc 00:05:18.966 ************************************ 00:05:18.966 03:05:22 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:18.966 * Looking for test storage... 00:05:18.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:18.966 03:05:22 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.966 03:05:22 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:18.966 03:05:22 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.966 03:05:22 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:18.966 03:05:22 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.966 03:05:22 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.966 03:05:22 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.966 03:05:22 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.966 03:05:22 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.966 03:05:22 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.966 03:05:22 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.966 03:05:22 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.966 03:05:22 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.966 03:05:22 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.966 03:05:22 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.966 03:05:22 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:18.966 03:05:22 rpc -- scripts/common.sh@345 -- # : 1 00:05:18.966 03:05:22 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.966 03:05:22 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.966 03:05:22 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:18.966 03:05:22 rpc -- scripts/common.sh@353 -- # local d=1 00:05:18.966 03:05:22 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.966 03:05:22 rpc -- scripts/common.sh@355 -- # echo 1 00:05:18.966 03:05:22 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.966 03:05:22 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:18.966 03:05:22 rpc -- scripts/common.sh@353 -- # local d=2 00:05:18.966 03:05:22 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.966 03:05:22 rpc -- scripts/common.sh@355 -- # echo 2 00:05:18.966 03:05:22 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.966 03:05:22 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.966 03:05:22 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.966 03:05:22 rpc -- scripts/common.sh@368 -- # return 0 00:05:18.966 03:05:22 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.966 03:05:22 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:18.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.966 --rc genhtml_branch_coverage=1 00:05:18.966 --rc genhtml_function_coverage=1 00:05:18.966 --rc genhtml_legend=1 00:05:18.966 --rc geninfo_all_blocks=1 00:05:18.966 --rc geninfo_unexecuted_blocks=1 00:05:18.966 00:05:18.966 ' 00:05:18.966 03:05:22 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:18.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.966 --rc genhtml_branch_coverage=1 00:05:18.966 --rc genhtml_function_coverage=1 00:05:18.966 --rc genhtml_legend=1 00:05:18.966 --rc geninfo_all_blocks=1 00:05:18.966 --rc geninfo_unexecuted_blocks=1 00:05:18.966 00:05:18.966 ' 00:05:18.966 03:05:22 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:18.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.966 --rc genhtml_branch_coverage=1 00:05:18.966 --rc genhtml_function_coverage=1 00:05:18.966 --rc genhtml_legend=1 00:05:18.966 --rc geninfo_all_blocks=1 00:05:18.967 --rc geninfo_unexecuted_blocks=1 00:05:18.967 00:05:18.967 ' 00:05:18.967 03:05:22 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:18.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.967 --rc genhtml_branch_coverage=1 00:05:18.967 --rc genhtml_function_coverage=1 00:05:18.967 --rc genhtml_legend=1 00:05:18.967 --rc geninfo_all_blocks=1 00:05:18.967 --rc geninfo_unexecuted_blocks=1 00:05:18.967 00:05:18.967 ' 00:05:18.967 03:05:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69750 00:05:18.967 03:05:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.967 03:05:22 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:18.967 03:05:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69750 00:05:18.967 03:05:22 rpc -- common/autotest_common.sh@831 -- # '[' -z 69750 ']' 00:05:18.967 03:05:22 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.967 03:05:22 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.967 03:05:22 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
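waitforlisten above simply polls until the freshly started spdk_tgt accepts RPCs on /var/tmp/spdk.sock before any test runs. A minimal hedged equivalent built on scripts/rpc.py; the retry count and interval here are illustrative:

    wait_for_rpc() {
        local i
        for ((i = 0; i < 100; i++)); do
            # rpc_get_methods succeeds only once the target is listening
            scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }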
00:05:18.967 03:05:22 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.967 03:05:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.967 [2024-11-18 03:05:22.526717] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:18.967 [2024-11-18 03:05:22.526838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69750 ] 00:05:19.228 [2024-11-18 03:05:22.672622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.228 [2024-11-18 03:05:22.706261] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:19.228 [2024-11-18 03:05:22.706324] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69750' to capture a snapshot of events at runtime. 00:05:19.228 [2024-11-18 03:05:22.706340] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:19.228 [2024-11-18 03:05:22.706348] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:19.228 [2024-11-18 03:05:22.706360] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69750 for offline analysis/debug. 00:05:19.228 [2024-11-18 03:05:22.706391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.796 03:05:23 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.796 03:05:23 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:19.796 03:05:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:19.796 03:05:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:19.796 03:05:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:19.796 03:05:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:19.796 03:05:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.796 03:05:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.796 03:05:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.796 ************************************ 00:05:19.796 START TEST rpc_integrity 00:05:19.796 ************************************ 00:05:19.796 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:19.796 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:19.796 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.796 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.055 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.055 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:20.055 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:20.055 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:20.055 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:20.055 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.055 
03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.055 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.055 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:20.055 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:20.055 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.055 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.055 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.055 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:20.055 { 00:05:20.055 "name": "Malloc0", 00:05:20.055 "aliases": [ 00:05:20.055 "113da244-5fa5-4d4b-971c-d19ec5439296" 00:05:20.055 ], 00:05:20.055 "product_name": "Malloc disk", 00:05:20.055 "block_size": 512, 00:05:20.055 "num_blocks": 16384, 00:05:20.055 "uuid": "113da244-5fa5-4d4b-971c-d19ec5439296", 00:05:20.055 "assigned_rate_limits": { 00:05:20.055 "rw_ios_per_sec": 0, 00:05:20.055 "rw_mbytes_per_sec": 0, 00:05:20.055 "r_mbytes_per_sec": 0, 00:05:20.055 "w_mbytes_per_sec": 0 00:05:20.055 }, 00:05:20.055 "claimed": false, 00:05:20.055 "zoned": false, 00:05:20.055 "supported_io_types": { 00:05:20.055 "read": true, 00:05:20.055 "write": true, 00:05:20.055 "unmap": true, 00:05:20.055 "flush": true, 00:05:20.055 "reset": true, 00:05:20.055 "nvme_admin": false, 00:05:20.055 "nvme_io": false, 00:05:20.055 "nvme_io_md": false, 00:05:20.055 "write_zeroes": true, 00:05:20.055 "zcopy": true, 00:05:20.055 "get_zone_info": false, 00:05:20.055 "zone_management": false, 00:05:20.055 "zone_append": false, 00:05:20.055 "compare": false, 00:05:20.055 "compare_and_write": false, 00:05:20.055 "abort": true, 00:05:20.055 "seek_hole": false, 00:05:20.055 "seek_data": false, 00:05:20.055 "copy": true, 00:05:20.055 "nvme_iov_md": false 00:05:20.055 }, 00:05:20.055 "memory_domains": [ 00:05:20.055 { 00:05:20.055 "dma_device_id": "system", 00:05:20.055 "dma_device_type": 1 00:05:20.055 }, 00:05:20.055 { 00:05:20.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.055 "dma_device_type": 2 00:05:20.055 } 00:05:20.055 ], 00:05:20.055 "driver_specific": {} 00:05:20.055 } 00:05:20.055 ]' 00:05:20.055 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:20.055 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:20.055 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:20.055 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.055 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.055 [2024-11-18 03:05:23.470756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:20.055 [2024-11-18 03:05:23.470828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.055 [2024-11-18 03:05:23.470855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:20.055 [2024-11-18 03:05:23.470864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.055 [2024-11-18 03:05:23.473083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.055 [2024-11-18 03:05:23.473119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:20.055 Passthru0 00:05:20.055 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
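The checks that follow validate passthru state by counting: rpc.sh dumps bdev_get_bdevs after each step and compares the JSON array length with jq (2 while Passthru0 sits on Malloc0, back to 0 after both deletes). The pattern, in essence:

    bdevs=$(rpc_cmd bdev_get_bdevs)          # rpc_cmd wraps scripts/rpc.py
    [[ $(jq length <<< "$bdevs") == 2 ]]     # Malloc0 + Passthru0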
00:05:20.055 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:20.055 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.055 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.055 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.055 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:20.055 { 00:05:20.055 "name": "Malloc0", 00:05:20.055 "aliases": [ 00:05:20.055 "113da244-5fa5-4d4b-971c-d19ec5439296" 00:05:20.055 ], 00:05:20.055 "product_name": "Malloc disk", 00:05:20.055 "block_size": 512, 00:05:20.055 "num_blocks": 16384, 00:05:20.055 "uuid": "113da244-5fa5-4d4b-971c-d19ec5439296", 00:05:20.055 "assigned_rate_limits": { 00:05:20.055 "rw_ios_per_sec": 0, 00:05:20.055 "rw_mbytes_per_sec": 0, 00:05:20.055 "r_mbytes_per_sec": 0, 00:05:20.055 "w_mbytes_per_sec": 0 00:05:20.055 }, 00:05:20.055 "claimed": true, 00:05:20.055 "claim_type": "exclusive_write", 00:05:20.055 "zoned": false, 00:05:20.055 "supported_io_types": { 00:05:20.055 "read": true, 00:05:20.055 "write": true, 00:05:20.055 "unmap": true, 00:05:20.055 "flush": true, 00:05:20.055 "reset": true, 00:05:20.055 "nvme_admin": false, 00:05:20.055 "nvme_io": false, 00:05:20.055 "nvme_io_md": false, 00:05:20.055 "write_zeroes": true, 00:05:20.055 "zcopy": true, 00:05:20.055 "get_zone_info": false, 00:05:20.055 "zone_management": false, 00:05:20.055 "zone_append": false, 00:05:20.055 "compare": false, 00:05:20.055 "compare_and_write": false, 00:05:20.055 "abort": true, 00:05:20.055 "seek_hole": false, 00:05:20.055 "seek_data": false, 00:05:20.055 "copy": true, 00:05:20.055 "nvme_iov_md": false 00:05:20.055 }, 00:05:20.055 "memory_domains": [ 00:05:20.055 { 00:05:20.055 "dma_device_id": "system", 00:05:20.055 "dma_device_type": 1 00:05:20.055 }, 00:05:20.055 { 00:05:20.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.055 "dma_device_type": 2 00:05:20.055 } 00:05:20.055 ], 00:05:20.055 "driver_specific": {} 00:05:20.055 }, 00:05:20.055 { 00:05:20.055 "name": "Passthru0", 00:05:20.055 "aliases": [ 00:05:20.055 "79ee67a3-cb50-59d0-b61f-55f8c085abf5" 00:05:20.055 ], 00:05:20.056 "product_name": "passthru", 00:05:20.056 "block_size": 512, 00:05:20.056 "num_blocks": 16384, 00:05:20.056 "uuid": "79ee67a3-cb50-59d0-b61f-55f8c085abf5", 00:05:20.056 "assigned_rate_limits": { 00:05:20.056 "rw_ios_per_sec": 0, 00:05:20.056 "rw_mbytes_per_sec": 0, 00:05:20.056 "r_mbytes_per_sec": 0, 00:05:20.056 "w_mbytes_per_sec": 0 00:05:20.056 }, 00:05:20.056 "claimed": false, 00:05:20.056 "zoned": false, 00:05:20.056 "supported_io_types": { 00:05:20.056 "read": true, 00:05:20.056 "write": true, 00:05:20.056 "unmap": true, 00:05:20.056 "flush": true, 00:05:20.056 "reset": true, 00:05:20.056 "nvme_admin": false, 00:05:20.056 "nvme_io": false, 00:05:20.056 "nvme_io_md": false, 00:05:20.056 "write_zeroes": true, 00:05:20.056 "zcopy": true, 00:05:20.056 "get_zone_info": false, 00:05:20.056 "zone_management": false, 00:05:20.056 "zone_append": false, 00:05:20.056 "compare": false, 00:05:20.056 "compare_and_write": false, 00:05:20.056 "abort": true, 00:05:20.056 "seek_hole": false, 00:05:20.056 "seek_data": false, 00:05:20.056 "copy": true, 00:05:20.056 "nvme_iov_md": false 00:05:20.056 }, 00:05:20.056 "memory_domains": [ 00:05:20.056 { 00:05:20.056 "dma_device_id": "system", 00:05:20.056 "dma_device_type": 1 00:05:20.056 }, 00:05:20.056 { 00:05:20.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.056 "dma_device_type": 
2 00:05:20.056 } 00:05:20.056 ], 00:05:20.056 "driver_specific": { 00:05:20.056 "passthru": { 00:05:20.056 "name": "Passthru0", 00:05:20.056 "base_bdev_name": "Malloc0" 00:05:20.056 } 00:05:20.056 } 00:05:20.056 } 00:05:20.056 ]' 00:05:20.056 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:20.056 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:20.056 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:20.056 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.056 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.056 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.056 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:20.056 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.056 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.056 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.056 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:20.056 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.056 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.056 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.056 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:20.056 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:20.056 03:05:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:20.056 00:05:20.056 real 0m0.225s 00:05:20.056 user 0m0.119s 00:05:20.056 sys 0m0.032s 00:05:20.056 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.056 ************************************ 00:05:20.056 END TEST rpc_integrity 00:05:20.056 ************************************ 00:05:20.056 03:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.316 03:05:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:20.316 03:05:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.316 03:05:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.316 03:05:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.316 ************************************ 00:05:20.316 START TEST rpc_plugins 00:05:20.316 ************************************ 00:05:20.316 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:20.316 03:05:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:20.316 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.316 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.316 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.316 03:05:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:20.316 03:05:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:20.316 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.316 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.316 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.316 03:05:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:20.316 { 00:05:20.316 "name": "Malloc1", 00:05:20.316 
"aliases": [ 00:05:20.316 "748c74d1-ae60-4b3f-9302-713b9c03003f" 00:05:20.316 ], 00:05:20.316 "product_name": "Malloc disk", 00:05:20.316 "block_size": 4096, 00:05:20.316 "num_blocks": 256, 00:05:20.316 "uuid": "748c74d1-ae60-4b3f-9302-713b9c03003f", 00:05:20.316 "assigned_rate_limits": { 00:05:20.316 "rw_ios_per_sec": 0, 00:05:20.317 "rw_mbytes_per_sec": 0, 00:05:20.317 "r_mbytes_per_sec": 0, 00:05:20.317 "w_mbytes_per_sec": 0 00:05:20.317 }, 00:05:20.317 "claimed": false, 00:05:20.317 "zoned": false, 00:05:20.317 "supported_io_types": { 00:05:20.317 "read": true, 00:05:20.317 "write": true, 00:05:20.317 "unmap": true, 00:05:20.317 "flush": true, 00:05:20.317 "reset": true, 00:05:20.317 "nvme_admin": false, 00:05:20.317 "nvme_io": false, 00:05:20.317 "nvme_io_md": false, 00:05:20.317 "write_zeroes": true, 00:05:20.317 "zcopy": true, 00:05:20.317 "get_zone_info": false, 00:05:20.317 "zone_management": false, 00:05:20.317 "zone_append": false, 00:05:20.317 "compare": false, 00:05:20.317 "compare_and_write": false, 00:05:20.317 "abort": true, 00:05:20.317 "seek_hole": false, 00:05:20.317 "seek_data": false, 00:05:20.317 "copy": true, 00:05:20.317 "nvme_iov_md": false 00:05:20.317 }, 00:05:20.317 "memory_domains": [ 00:05:20.317 { 00:05:20.317 "dma_device_id": "system", 00:05:20.317 "dma_device_type": 1 00:05:20.317 }, 00:05:20.317 { 00:05:20.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.317 "dma_device_type": 2 00:05:20.317 } 00:05:20.317 ], 00:05:20.317 "driver_specific": {} 00:05:20.317 } 00:05:20.317 ]' 00:05:20.317 03:05:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:20.317 03:05:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:20.317 03:05:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:20.317 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.317 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.317 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.317 03:05:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:20.317 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.317 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.317 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.317 03:05:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:20.317 03:05:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:20.317 03:05:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:20.317 00:05:20.317 real 0m0.107s 00:05:20.317 user 0m0.064s 00:05:20.317 sys 0m0.007s 00:05:20.317 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.317 ************************************ 00:05:20.317 END TEST rpc_plugins 00:05:20.317 ************************************ 00:05:20.317 03:05:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:20.317 03:05:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:20.317 03:05:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.317 03:05:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.317 03:05:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.317 ************************************ 00:05:20.317 START TEST rpc_trace_cmd_test 00:05:20.317 ************************************ 00:05:20.317 03:05:23 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:20.317 03:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:20.317 03:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:20.317 03:05:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.317 03:05:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.317 03:05:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.317 03:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:20.317 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69750", 00:05:20.317 "tpoint_group_mask": "0x8", 00:05:20.317 "iscsi_conn": { 00:05:20.317 "mask": "0x2", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "scsi": { 00:05:20.317 "mask": "0x4", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "bdev": { 00:05:20.317 "mask": "0x8", 00:05:20.317 "tpoint_mask": "0xffffffffffffffff" 00:05:20.317 }, 00:05:20.317 "nvmf_rdma": { 00:05:20.317 "mask": "0x10", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "nvmf_tcp": { 00:05:20.317 "mask": "0x20", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "ftl": { 00:05:20.317 "mask": "0x40", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "blobfs": { 00:05:20.317 "mask": "0x80", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "dsa": { 00:05:20.317 "mask": "0x200", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "thread": { 00:05:20.317 "mask": "0x400", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "nvme_pcie": { 00:05:20.317 "mask": "0x800", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "iaa": { 00:05:20.317 "mask": "0x1000", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "nvme_tcp": { 00:05:20.317 "mask": "0x2000", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "bdev_nvme": { 00:05:20.317 "mask": "0x4000", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "sock": { 00:05:20.317 "mask": "0x8000", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "blob": { 00:05:20.317 "mask": "0x10000", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 }, 00:05:20.317 "bdev_raid": { 00:05:20.317 "mask": "0x20000", 00:05:20.317 "tpoint_mask": "0x0" 00:05:20.317 } 00:05:20.317 }' 00:05:20.317 03:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:20.317 03:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:20.317 03:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:20.577 03:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:20.577 03:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:20.577 03:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:20.577 03:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:20.577 03:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:20.577 03:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:20.577 03:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:20.577 00:05:20.577 real 0m0.171s 00:05:20.577 user 0m0.142s 00:05:20.577 sys 0m0.020s 00:05:20.577 03:05:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.577 ************************************ 00:05:20.577 END 
TEST rpc_trace_cmd_test 00:05:20.577 ************************************ 00:05:20.577 03:05:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.577 03:05:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:20.577 03:05:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:20.577 03:05:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:20.577 03:05:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.577 03:05:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.577 03:05:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.577 ************************************ 00:05:20.577 START TEST rpc_daemon_integrity 00:05:20.577 ************************************ 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:20.577 { 00:05:20.577 "name": "Malloc2", 00:05:20.577 "aliases": [ 00:05:20.577 "6574151c-4ce3-4714-9155-255a079a9ef5" 00:05:20.577 ], 00:05:20.577 "product_name": "Malloc disk", 00:05:20.577 "block_size": 512, 00:05:20.577 "num_blocks": 16384, 00:05:20.577 "uuid": "6574151c-4ce3-4714-9155-255a079a9ef5", 00:05:20.577 "assigned_rate_limits": { 00:05:20.577 "rw_ios_per_sec": 0, 00:05:20.577 "rw_mbytes_per_sec": 0, 00:05:20.577 "r_mbytes_per_sec": 0, 00:05:20.577 "w_mbytes_per_sec": 0 00:05:20.577 }, 00:05:20.577 "claimed": false, 00:05:20.577 "zoned": false, 00:05:20.577 "supported_io_types": { 00:05:20.577 "read": true, 00:05:20.577 "write": true, 00:05:20.577 "unmap": true, 00:05:20.577 "flush": true, 00:05:20.577 "reset": true, 00:05:20.577 "nvme_admin": false, 00:05:20.577 "nvme_io": false, 00:05:20.577 "nvme_io_md": false, 00:05:20.577 "write_zeroes": true, 00:05:20.577 "zcopy": true, 00:05:20.577 "get_zone_info": false, 00:05:20.577 "zone_management": false, 00:05:20.577 "zone_append": false, 00:05:20.577 "compare": false, 00:05:20.577 "compare_and_write": false, 00:05:20.577 "abort": true, 00:05:20.577 "seek_hole": 
false, 00:05:20.577 "seek_data": false, 00:05:20.577 "copy": true, 00:05:20.577 "nvme_iov_md": false 00:05:20.577 }, 00:05:20.577 "memory_domains": [ 00:05:20.577 { 00:05:20.577 "dma_device_id": "system", 00:05:20.577 "dma_device_type": 1 00:05:20.577 }, 00:05:20.577 { 00:05:20.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.577 "dma_device_type": 2 00:05:20.577 } 00:05:20.577 ], 00:05:20.577 "driver_specific": {} 00:05:20.577 } 00:05:20.577 ]' 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.577 [2024-11-18 03:05:24.143145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:20.577 [2024-11-18 03:05:24.143199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.577 [2024-11-18 03:05:24.143218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:20.577 [2024-11-18 03:05:24.143227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.577 [2024-11-18 03:05:24.145419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.577 [2024-11-18 03:05:24.145452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:20.577 Passthru0 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.577 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.836 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.836 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:20.836 { 00:05:20.836 "name": "Malloc2", 00:05:20.836 "aliases": [ 00:05:20.836 "6574151c-4ce3-4714-9155-255a079a9ef5" 00:05:20.836 ], 00:05:20.836 "product_name": "Malloc disk", 00:05:20.836 "block_size": 512, 00:05:20.836 "num_blocks": 16384, 00:05:20.836 "uuid": "6574151c-4ce3-4714-9155-255a079a9ef5", 00:05:20.836 "assigned_rate_limits": { 00:05:20.836 "rw_ios_per_sec": 0, 00:05:20.836 "rw_mbytes_per_sec": 0, 00:05:20.836 "r_mbytes_per_sec": 0, 00:05:20.836 "w_mbytes_per_sec": 0 00:05:20.836 }, 00:05:20.836 "claimed": true, 00:05:20.836 "claim_type": "exclusive_write", 00:05:20.836 "zoned": false, 00:05:20.836 "supported_io_types": { 00:05:20.836 "read": true, 00:05:20.836 "write": true, 00:05:20.836 "unmap": true, 00:05:20.836 "flush": true, 00:05:20.836 "reset": true, 00:05:20.836 "nvme_admin": false, 00:05:20.836 "nvme_io": false, 00:05:20.836 "nvme_io_md": false, 00:05:20.836 "write_zeroes": true, 00:05:20.836 "zcopy": true, 00:05:20.836 "get_zone_info": false, 00:05:20.836 "zone_management": false, 00:05:20.836 "zone_append": false, 00:05:20.836 "compare": false, 00:05:20.836 "compare_and_write": false, 00:05:20.836 "abort": true, 00:05:20.836 "seek_hole": false, 00:05:20.836 "seek_data": false, 00:05:20.836 "copy": true, 00:05:20.836 "nvme_iov_md": false 00:05:20.836 }, 
00:05:20.836 "memory_domains": [ 00:05:20.836 { 00:05:20.836 "dma_device_id": "system", 00:05:20.836 "dma_device_type": 1 00:05:20.836 }, 00:05:20.836 { 00:05:20.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.836 "dma_device_type": 2 00:05:20.836 } 00:05:20.836 ], 00:05:20.836 "driver_specific": {} 00:05:20.836 }, 00:05:20.836 { 00:05:20.836 "name": "Passthru0", 00:05:20.836 "aliases": [ 00:05:20.836 "f411a53d-a6d2-5ef2-b759-f73c1fb5a725" 00:05:20.836 ], 00:05:20.836 "product_name": "passthru", 00:05:20.836 "block_size": 512, 00:05:20.836 "num_blocks": 16384, 00:05:20.836 "uuid": "f411a53d-a6d2-5ef2-b759-f73c1fb5a725", 00:05:20.836 "assigned_rate_limits": { 00:05:20.836 "rw_ios_per_sec": 0, 00:05:20.836 "rw_mbytes_per_sec": 0, 00:05:20.836 "r_mbytes_per_sec": 0, 00:05:20.836 "w_mbytes_per_sec": 0 00:05:20.836 }, 00:05:20.836 "claimed": false, 00:05:20.836 "zoned": false, 00:05:20.836 "supported_io_types": { 00:05:20.836 "read": true, 00:05:20.836 "write": true, 00:05:20.836 "unmap": true, 00:05:20.836 "flush": true, 00:05:20.836 "reset": true, 00:05:20.836 "nvme_admin": false, 00:05:20.836 "nvme_io": false, 00:05:20.836 "nvme_io_md": false, 00:05:20.836 "write_zeroes": true, 00:05:20.836 "zcopy": true, 00:05:20.836 "get_zone_info": false, 00:05:20.836 "zone_management": false, 00:05:20.836 "zone_append": false, 00:05:20.836 "compare": false, 00:05:20.836 "compare_and_write": false, 00:05:20.836 "abort": true, 00:05:20.836 "seek_hole": false, 00:05:20.836 "seek_data": false, 00:05:20.836 "copy": true, 00:05:20.836 "nvme_iov_md": false 00:05:20.836 }, 00:05:20.836 "memory_domains": [ 00:05:20.836 { 00:05:20.836 "dma_device_id": "system", 00:05:20.836 "dma_device_type": 1 00:05:20.836 }, 00:05:20.836 { 00:05:20.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.837 "dma_device_type": 2 00:05:20.837 } 00:05:20.837 ], 00:05:20.837 "driver_specific": { 00:05:20.837 "passthru": { 00:05:20.837 "name": "Passthru0", 00:05:20.837 "base_bdev_name": "Malloc2" 00:05:20.837 } 00:05:20.837 } 00:05:20.837 } 00:05:20.837 ]' 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 
00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:20.837 00:05:20.837 real 0m0.218s 00:05:20.837 user 0m0.121s 00:05:20.837 sys 0m0.038s 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.837 03:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.837 ************************************ 00:05:20.837 END TEST rpc_daemon_integrity 00:05:20.837 ************************************ 00:05:20.837 03:05:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:20.837 03:05:24 rpc -- rpc/rpc.sh@84 -- # killprocess 69750 00:05:20.837 03:05:24 rpc -- common/autotest_common.sh@950 -- # '[' -z 69750 ']' 00:05:20.837 03:05:24 rpc -- common/autotest_common.sh@954 -- # kill -0 69750 00:05:20.837 03:05:24 rpc -- common/autotest_common.sh@955 -- # uname 00:05:20.837 03:05:24 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.837 03:05:24 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69750 00:05:20.837 03:05:24 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.837 killing process with pid 69750 00:05:20.837 03:05:24 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.837 03:05:24 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69750' 00:05:20.837 03:05:24 rpc -- common/autotest_common.sh@969 -- # kill 69750 00:05:20.837 03:05:24 rpc -- common/autotest_common.sh@974 -- # wait 69750 00:05:21.095 00:05:21.095 real 0m2.278s 00:05:21.095 user 0m2.690s 00:05:21.095 sys 0m0.567s 00:05:21.095 03:05:24 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.095 03:05:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.095 ************************************ 00:05:21.095 END TEST rpc 00:05:21.095 ************************************ 00:05:21.095 03:05:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:21.095 03:05:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.095 03:05:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.095 03:05:24 -- common/autotest_common.sh@10 -- # set +x 00:05:21.095 ************************************ 00:05:21.095 START TEST skip_rpc 00:05:21.095 ************************************ 00:05:21.095 03:05:24 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:21.353 * Looking for test storage... 
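The rpc_plugins case earlier goes through rpc.py's plugin loader rather than a built-in method: --plugin names a Python module that must be importable, and that module registers the extra commands. A rough equivalent outside the harness, assuming the test plugin sits at test/rpc_plugins/rpc_plugin.py as in the SPDK tree (the path is an assumption):

  export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
  rpc.py --plugin rpc_plugin create_malloc          # plugin-defined; prints the new bdev name, e.g. Malloc1
  rpc.py --plugin rpc_plugin delete_malloc Malloc1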
00:05:21.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:21.353 03:05:24 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:21.353 03:05:24 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:21.353 03:05:24 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:21.353 03:05:24 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:21.353 03:05:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.353 03:05:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.354 03:05:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:21.354 03:05:24 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.354 03:05:24 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:21.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.354 --rc genhtml_branch_coverage=1 00:05:21.354 --rc genhtml_function_coverage=1 00:05:21.354 --rc genhtml_legend=1 00:05:21.354 --rc geninfo_all_blocks=1 00:05:21.354 --rc geninfo_unexecuted_blocks=1 00:05:21.354 00:05:21.354 ' 00:05:21.354 03:05:24 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:21.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.354 --rc genhtml_branch_coverage=1 00:05:21.354 --rc genhtml_function_coverage=1 00:05:21.354 --rc genhtml_legend=1 00:05:21.354 --rc geninfo_all_blocks=1 00:05:21.354 --rc geninfo_unexecuted_blocks=1 00:05:21.354 00:05:21.354 ' 00:05:21.354 03:05:24 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:05:21.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.354 --rc genhtml_branch_coverage=1 00:05:21.354 --rc genhtml_function_coverage=1 00:05:21.354 --rc genhtml_legend=1 00:05:21.354 --rc geninfo_all_blocks=1 00:05:21.354 --rc geninfo_unexecuted_blocks=1 00:05:21.354 00:05:21.354 ' 00:05:21.354 03:05:24 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:21.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.354 --rc genhtml_branch_coverage=1 00:05:21.354 --rc genhtml_function_coverage=1 00:05:21.354 --rc genhtml_legend=1 00:05:21.354 --rc geninfo_all_blocks=1 00:05:21.354 --rc geninfo_unexecuted_blocks=1 00:05:21.354 00:05:21.354 ' 00:05:21.354 03:05:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:21.354 03:05:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:21.354 03:05:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:21.354 03:05:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.354 03:05:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.354 03:05:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.354 ************************************ 00:05:21.354 START TEST skip_rpc 00:05:21.354 ************************************ 00:05:21.354 03:05:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:21.354 03:05:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69952 00:05:21.354 03:05:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.354 03:05:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:21.354 03:05:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:21.354 [2024-11-18 03:05:24.880422] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
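What skip_rpc asserts below is that a target started with --no-rpc-server answers no RPCs at all. Stripped of the harness, the check amounts to something like this (the binary path and the flat 5-second startup sleep are as in the log; the 2-second rpc.py timeout is an assumption):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!
  sleep 5                                  # the test also waits a flat 5 s before probing
  if rpc.py -t 2 spdk_get_version; then
    echo "unexpected: RPC answered despite --no-rpc-server" >&2
  fi
  kill "$pid"; wait "$pid"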
00:05:21.354 [2024-11-18 03:05:24.880652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69952 ] 00:05:21.612 [2024-11-18 03:05:25.030466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.612 [2024-11-18 03:05:25.063248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69952 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69952 ']' 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69952 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69952 00:05:26.880 killing process with pid 69952 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69952' 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69952 00:05:26.880 03:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69952 00:05:26.880 00:05:26.880 real 0m5.270s 00:05:26.880 user 0m4.937s 00:05:26.880 sys 0m0.230s 00:05:26.880 03:05:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.880 03:05:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.880 ************************************ 00:05:26.880 END TEST skip_rpc 00:05:26.880 
************************************ 00:05:26.880 03:05:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:26.880 03:05:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.880 03:05:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.880 03:05:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.880 ************************************ 00:05:26.880 START TEST skip_rpc_with_json 00:05:26.880 ************************************ 00:05:26.880 03:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:26.880 03:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:26.880 03:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70039 00:05:26.880 03:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.880 03:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70039 00:05:26.880 03:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70039 ']' 00:05:26.881 03:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.881 03:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.881 03:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.881 03:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.881 03:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.881 03:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.881 [2024-11-18 03:05:30.176303] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:26.881 [2024-11-18 03:05:30.176411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70039 ] 00:05:26.881 [2024-11-18 03:05:30.314175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.881 [2024-11-18 03:05:30.343152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.446 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.446 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:27.446 03:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:27.446 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.446 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.446 [2024-11-18 03:05:31.015809] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:27.705 request: 00:05:27.705 { 00:05:27.705 "trtype": "tcp", 00:05:27.705 "method": "nvmf_get_transports", 00:05:27.705 "req_id": 1 00:05:27.705 } 00:05:27.705 Got JSON-RPC error response 00:05:27.705 response: 00:05:27.705 { 00:05:27.705 "code": -19, 00:05:27.705 "message": "No such device" 00:05:27.705 } 00:05:27.705 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:27.705 03:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:27.705 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.705 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.705 [2024-11-18 03:05:31.023907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.705 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.705 03:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:27.705 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.705 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.705 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.705 03:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:27.705 { 00:05:27.705 "subsystems": [ 00:05:27.705 { 00:05:27.705 "subsystem": "fsdev", 00:05:27.705 "config": [ 00:05:27.705 { 00:05:27.705 "method": "fsdev_set_opts", 00:05:27.705 "params": { 00:05:27.705 "fsdev_io_pool_size": 65535, 00:05:27.705 "fsdev_io_cache_size": 256 00:05:27.705 } 00:05:27.705 } 00:05:27.705 ] 00:05:27.705 }, 00:05:27.705 { 00:05:27.705 "subsystem": "keyring", 00:05:27.705 "config": [] 00:05:27.705 }, 00:05:27.705 { 00:05:27.705 "subsystem": "iobuf", 00:05:27.705 "config": [ 00:05:27.705 { 00:05:27.705 "method": "iobuf_set_options", 00:05:27.705 "params": { 00:05:27.705 "small_pool_count": 8192, 00:05:27.705 "large_pool_count": 1024, 00:05:27.705 "small_bufsize": 8192, 00:05:27.705 "large_bufsize": 135168 00:05:27.705 } 00:05:27.705 } 00:05:27.705 ] 00:05:27.705 }, 00:05:27.705 { 00:05:27.705 "subsystem": "sock", 00:05:27.705 "config": [ 00:05:27.705 { 00:05:27.705 "method": 
"sock_set_default_impl", 00:05:27.705 "params": { 00:05:27.705 "impl_name": "posix" 00:05:27.705 } 00:05:27.705 }, 00:05:27.705 { 00:05:27.705 "method": "sock_impl_set_options", 00:05:27.705 "params": { 00:05:27.705 "impl_name": "ssl", 00:05:27.705 "recv_buf_size": 4096, 00:05:27.705 "send_buf_size": 4096, 00:05:27.705 "enable_recv_pipe": true, 00:05:27.705 "enable_quickack": false, 00:05:27.705 "enable_placement_id": 0, 00:05:27.705 "enable_zerocopy_send_server": true, 00:05:27.705 "enable_zerocopy_send_client": false, 00:05:27.705 "zerocopy_threshold": 0, 00:05:27.705 "tls_version": 0, 00:05:27.705 "enable_ktls": false 00:05:27.705 } 00:05:27.705 }, 00:05:27.705 { 00:05:27.705 "method": "sock_impl_set_options", 00:05:27.705 "params": { 00:05:27.705 "impl_name": "posix", 00:05:27.705 "recv_buf_size": 2097152, 00:05:27.705 "send_buf_size": 2097152, 00:05:27.705 "enable_recv_pipe": true, 00:05:27.705 "enable_quickack": false, 00:05:27.705 "enable_placement_id": 0, 00:05:27.705 "enable_zerocopy_send_server": true, 00:05:27.705 "enable_zerocopy_send_client": false, 00:05:27.705 "zerocopy_threshold": 0, 00:05:27.705 "tls_version": 0, 00:05:27.705 "enable_ktls": false 00:05:27.705 } 00:05:27.705 } 00:05:27.705 ] 00:05:27.705 }, 00:05:27.705 { 00:05:27.705 "subsystem": "vmd", 00:05:27.705 "config": [] 00:05:27.705 }, 00:05:27.705 { 00:05:27.705 "subsystem": "accel", 00:05:27.705 "config": [ 00:05:27.705 { 00:05:27.705 "method": "accel_set_options", 00:05:27.705 "params": { 00:05:27.705 "small_cache_size": 128, 00:05:27.705 "large_cache_size": 16, 00:05:27.705 "task_count": 2048, 00:05:27.705 "sequence_count": 2048, 00:05:27.705 "buf_count": 2048 00:05:27.705 } 00:05:27.705 } 00:05:27.705 ] 00:05:27.705 }, 00:05:27.705 { 00:05:27.705 "subsystem": "bdev", 00:05:27.705 "config": [ 00:05:27.705 { 00:05:27.705 "method": "bdev_set_options", 00:05:27.705 "params": { 00:05:27.705 "bdev_io_pool_size": 65535, 00:05:27.705 "bdev_io_cache_size": 256, 00:05:27.706 "bdev_auto_examine": true, 00:05:27.706 "iobuf_small_cache_size": 128, 00:05:27.706 "iobuf_large_cache_size": 16 00:05:27.706 } 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "method": "bdev_raid_set_options", 00:05:27.706 "params": { 00:05:27.706 "process_window_size_kb": 1024, 00:05:27.706 "process_max_bandwidth_mb_sec": 0 00:05:27.706 } 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "method": "bdev_iscsi_set_options", 00:05:27.706 "params": { 00:05:27.706 "timeout_sec": 30 00:05:27.706 } 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "method": "bdev_nvme_set_options", 00:05:27.706 "params": { 00:05:27.706 "action_on_timeout": "none", 00:05:27.706 "timeout_us": 0, 00:05:27.706 "timeout_admin_us": 0, 00:05:27.706 "keep_alive_timeout_ms": 10000, 00:05:27.706 "arbitration_burst": 0, 00:05:27.706 "low_priority_weight": 0, 00:05:27.706 "medium_priority_weight": 0, 00:05:27.706 "high_priority_weight": 0, 00:05:27.706 "nvme_adminq_poll_period_us": 10000, 00:05:27.706 "nvme_ioq_poll_period_us": 0, 00:05:27.706 "io_queue_requests": 0, 00:05:27.706 "delay_cmd_submit": true, 00:05:27.706 "transport_retry_count": 4, 00:05:27.706 "bdev_retry_count": 3, 00:05:27.706 "transport_ack_timeout": 0, 00:05:27.706 "ctrlr_loss_timeout_sec": 0, 00:05:27.706 "reconnect_delay_sec": 0, 00:05:27.706 "fast_io_fail_timeout_sec": 0, 00:05:27.706 "disable_auto_failback": false, 00:05:27.706 "generate_uuids": false, 00:05:27.706 "transport_tos": 0, 00:05:27.706 "nvme_error_stat": false, 00:05:27.706 "rdma_srq_size": 0, 00:05:27.706 "io_path_stat": false, 00:05:27.706 
"allow_accel_sequence": false, 00:05:27.706 "rdma_max_cq_size": 0, 00:05:27.706 "rdma_cm_event_timeout_ms": 0, 00:05:27.706 "dhchap_digests": [ 00:05:27.706 "sha256", 00:05:27.706 "sha384", 00:05:27.706 "sha512" 00:05:27.706 ], 00:05:27.706 "dhchap_dhgroups": [ 00:05:27.706 "null", 00:05:27.706 "ffdhe2048", 00:05:27.706 "ffdhe3072", 00:05:27.706 "ffdhe4096", 00:05:27.706 "ffdhe6144", 00:05:27.706 "ffdhe8192" 00:05:27.706 ] 00:05:27.706 } 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "method": "bdev_nvme_set_hotplug", 00:05:27.706 "params": { 00:05:27.706 "period_us": 100000, 00:05:27.706 "enable": false 00:05:27.706 } 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "method": "bdev_wait_for_examine" 00:05:27.706 } 00:05:27.706 ] 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "subsystem": "scsi", 00:05:27.706 "config": null 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "subsystem": "scheduler", 00:05:27.706 "config": [ 00:05:27.706 { 00:05:27.706 "method": "framework_set_scheduler", 00:05:27.706 "params": { 00:05:27.706 "name": "static" 00:05:27.706 } 00:05:27.706 } 00:05:27.706 ] 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "subsystem": "vhost_scsi", 00:05:27.706 "config": [] 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "subsystem": "vhost_blk", 00:05:27.706 "config": [] 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "subsystem": "ublk", 00:05:27.706 "config": [] 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "subsystem": "nbd", 00:05:27.706 "config": [] 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "subsystem": "nvmf", 00:05:27.706 "config": [ 00:05:27.706 { 00:05:27.706 "method": "nvmf_set_config", 00:05:27.706 "params": { 00:05:27.706 "discovery_filter": "match_any", 00:05:27.706 "admin_cmd_passthru": { 00:05:27.706 "identify_ctrlr": false 00:05:27.706 }, 00:05:27.706 "dhchap_digests": [ 00:05:27.706 "sha256", 00:05:27.706 "sha384", 00:05:27.706 "sha512" 00:05:27.706 ], 00:05:27.706 "dhchap_dhgroups": [ 00:05:27.706 "null", 00:05:27.706 "ffdhe2048", 00:05:27.706 "ffdhe3072", 00:05:27.706 "ffdhe4096", 00:05:27.706 "ffdhe6144", 00:05:27.706 "ffdhe8192" 00:05:27.706 ] 00:05:27.706 } 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "method": "nvmf_set_max_subsystems", 00:05:27.706 "params": { 00:05:27.706 "max_subsystems": 1024 00:05:27.706 } 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "method": "nvmf_set_crdt", 00:05:27.706 "params": { 00:05:27.706 "crdt1": 0, 00:05:27.706 "crdt2": 0, 00:05:27.706 "crdt3": 0 00:05:27.706 } 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "method": "nvmf_create_transport", 00:05:27.706 "params": { 00:05:27.706 "trtype": "TCP", 00:05:27.706 "max_queue_depth": 128, 00:05:27.706 "max_io_qpairs_per_ctrlr": 127, 00:05:27.706 "in_capsule_data_size": 4096, 00:05:27.706 "max_io_size": 131072, 00:05:27.706 "io_unit_size": 131072, 00:05:27.706 "max_aq_depth": 128, 00:05:27.706 "num_shared_buffers": 511, 00:05:27.706 "buf_cache_size": 4294967295, 00:05:27.706 "dif_insert_or_strip": false, 00:05:27.706 "zcopy": false, 00:05:27.706 "c2h_success": true, 00:05:27.706 "sock_priority": 0, 00:05:27.706 "abort_timeout_sec": 1, 00:05:27.706 "ack_timeout": 0, 00:05:27.706 "data_wr_pool_size": 0 00:05:27.706 } 00:05:27.706 } 00:05:27.706 ] 00:05:27.706 }, 00:05:27.706 { 00:05:27.706 "subsystem": "iscsi", 00:05:27.706 "config": [ 00:05:27.706 { 00:05:27.706 "method": "iscsi_set_options", 00:05:27.706 "params": { 00:05:27.706 "node_base": "iqn.2016-06.io.spdk", 00:05:27.706 "max_sessions": 128, 00:05:27.706 "max_connections_per_session": 2, 00:05:27.706 "max_queue_depth": 64, 00:05:27.706 "default_time2wait": 2, 
00:05:27.706 "default_time2retain": 20, 00:05:27.706 "first_burst_length": 8192, 00:05:27.706 "immediate_data": true, 00:05:27.706 "allow_duplicated_isid": false, 00:05:27.706 "error_recovery_level": 0, 00:05:27.706 "nop_timeout": 60, 00:05:27.706 "nop_in_interval": 30, 00:05:27.706 "disable_chap": false, 00:05:27.706 "require_chap": false, 00:05:27.706 "mutual_chap": false, 00:05:27.706 "chap_group": 0, 00:05:27.706 "max_large_datain_per_connection": 64, 00:05:27.706 "max_r2t_per_connection": 4, 00:05:27.706 "pdu_pool_size": 36864, 00:05:27.707 "immediate_data_pool_size": 16384, 00:05:27.707 "data_out_pool_size": 2048 00:05:27.707 } 00:05:27.707 } 00:05:27.707 ] 00:05:27.707 } 00:05:27.707 ] 00:05:27.707 } 00:05:27.707 03:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:27.707 03:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70039 00:05:27.707 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70039 ']' 00:05:27.707 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70039 00:05:27.707 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:27.707 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.707 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70039 00:05:27.707 killing process with pid 70039 00:05:27.707 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.707 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.707 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70039' 00:05:27.707 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70039 00:05:27.707 03:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70039 00:05:27.966 03:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70062 00:05:27.966 03:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:27.966 03:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70062 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70062 ']' 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70062 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70062 00:05:33.258 killing process with pid 70062 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70062' 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70062 
00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70062 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:33.258 ************************************ 00:05:33.258 00:05:33.258 real 0m6.604s 00:05:33.258 user 0m6.315s 00:05:33.258 sys 0m0.510s 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.258 END TEST skip_rpc_with_json 00:05:33.258 ************************************ 00:05:33.258 03:05:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:33.258 03:05:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.258 03:05:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.258 03:05:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.258 ************************************ 00:05:33.258 START TEST skip_rpc_with_delay 00:05:33.258 ************************************ 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:33.258 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.520 [2024-11-18 03:05:36.843533] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
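The "Cannot use '--wait-for-rpc'" error above is the expected result: the flag only makes sense when there is an RPC server to wait on. The normal pairing, sketched here (framework_start_init is the standard rpc.py call for releasing a parked target, shown as an illustration rather than something this test runs):

  spdk_tgt --wait-for-rpc -m 0x1 &   # boots, then parks until told to finish initialization
  sleep 1                            # crude; the harness polls the socket instead
  rpc.py framework_start_init        # completes subsystem init
  rpc.py spdk_get_version            # the target now answers normal RPCs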
00:05:33.520 [2024-11-18 03:05:36.844174] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:33.520 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:33.520 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:33.520 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:33.520 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:33.520 00:05:33.520 real 0m0.125s 00:05:33.520 user 0m0.062s 00:05:33.520 sys 0m0.061s 00:05:33.520 ************************************ 00:05:33.520 END TEST skip_rpc_with_delay 00:05:33.520 ************************************ 00:05:33.520 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.520 03:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:33.520 03:05:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:33.520 03:05:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:33.520 03:05:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:33.520 03:05:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.520 03:05:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.520 03:05:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.520 ************************************ 00:05:33.520 START TEST exit_on_failed_rpc_init 00:05:33.520 ************************************ 00:05:33.520 03:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:33.520 03:05:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70174 00:05:33.520 03:05:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70174 00:05:33.520 03:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70174 ']' 00:05:33.520 03:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.520 03:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.520 03:05:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.520 03:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.520 03:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.520 03:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.520 [2024-11-18 03:05:37.042135] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
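waitforlisten, invoked above for pid 70174, boils down to polling the RPC socket until the target answers. A rough stand-in (the 1-second per-attempt timeout and 0.2-second backoff are assumptions):

  until rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
    sleep 0.2
  done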
00:05:33.520 [2024-11-18 03:05:37.042280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70174 ] 00:05:33.781 [2024-11-18 03:05:37.195356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.781 [2024-11-18 03:05:37.246537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:34.376 03:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.636 [2024-11-18 03:05:37.977815] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:34.636 [2024-11-18 03:05:37.977966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70192 ] 00:05:34.636 [2024-11-18 03:05:38.130326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.636 [2024-11-18 03:05:38.178791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.636 [2024-11-18 03:05:38.178899] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
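exit_on_failed_rpc_init provokes exactly the failure shown: a second spdk_tgt trying to bind the default /var/tmp/spdk.sock that the first instance already holds. Running two targets side by side needs distinct sockets, e.g. (socket paths made up for illustration):

  spdk_tgt -m 0x1 -r /var/tmp/spdk0.sock &
  spdk_tgt -m 0x2 -r /var/tmp/spdk1.sock &
  rpc.py -s /var/tmp/spdk0.sock spdk_get_version
  rpc.py -s /var/tmp/spdk1.sock spdk_get_version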
00:05:34.636 [2024-11-18 03:05:38.178917] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:34.636 [2024-11-18 03:05:38.178929] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.896 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:34.896 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:34.896 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:34.896 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:34.896 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:34.896 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:34.896 03:05:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:34.896 03:05:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70174 00:05:34.896 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70174 ']' 00:05:34.896 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70174 00:05:34.896 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:34.897 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.897 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70174 00:05:34.897 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.897 killing process with pid 70174 00:05:34.897 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.897 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70174' 00:05:34.897 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70174 00:05:34.897 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70174 00:05:35.157 00:05:35.157 real 0m1.690s 00:05:35.157 user 0m1.823s 00:05:35.157 sys 0m0.487s 00:05:35.157 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.157 ************************************ 00:05:35.157 END TEST exit_on_failed_rpc_init 00:05:35.157 ************************************ 00:05:35.157 03:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.157 03:05:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:35.157 00:05:35.157 real 0m14.052s 00:05:35.157 user 0m13.275s 00:05:35.157 sys 0m1.466s 00:05:35.157 03:05:38 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.157 03:05:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.157 ************************************ 00:05:35.157 END TEST skip_rpc 00:05:35.157 ************************************ 00:05:35.418 03:05:38 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:35.418 03:05:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.418 03:05:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.418 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:35.418 
************************************ 00:05:35.418 START TEST rpc_client 00:05:35.418 ************************************ 00:05:35.418 03:05:38 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:35.418 * Looking for test storage... 00:05:35.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:35.418 03:05:38 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:35.418 03:05:38 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:35.418 03:05:38 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:35.418 03:05:38 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.418 03:05:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:35.418 03:05:38 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.418 03:05:38 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:35.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.418 --rc genhtml_branch_coverage=1 00:05:35.418 --rc genhtml_function_coverage=1 00:05:35.418 --rc genhtml_legend=1 00:05:35.418 --rc geninfo_all_blocks=1 00:05:35.418 --rc geninfo_unexecuted_blocks=1 00:05:35.418 00:05:35.418 ' 00:05:35.418 03:05:38 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:35.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.418 --rc genhtml_branch_coverage=1 00:05:35.418 --rc genhtml_function_coverage=1 00:05:35.418 --rc genhtml_legend=1 00:05:35.418 --rc geninfo_all_blocks=1 00:05:35.418 --rc geninfo_unexecuted_blocks=1 00:05:35.418 00:05:35.418 ' 00:05:35.418 03:05:38 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:35.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.418 --rc genhtml_branch_coverage=1 00:05:35.418 --rc genhtml_function_coverage=1 00:05:35.418 --rc genhtml_legend=1 00:05:35.418 --rc geninfo_all_blocks=1 00:05:35.418 --rc geninfo_unexecuted_blocks=1 00:05:35.418 00:05:35.418 ' 00:05:35.418 03:05:38 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:35.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.418 --rc genhtml_branch_coverage=1 00:05:35.418 --rc genhtml_function_coverage=1 00:05:35.418 --rc genhtml_legend=1 00:05:35.418 --rc geninfo_all_blocks=1 00:05:35.418 --rc geninfo_unexecuted_blocks=1 00:05:35.418 00:05:35.418 ' 00:05:35.418 03:05:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:35.418 OK 00:05:35.418 03:05:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:35.418 00:05:35.418 real 0m0.192s 00:05:35.418 user 0m0.115s 00:05:35.418 sys 0m0.082s 00:05:35.418 03:05:38 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.418 ************************************ 00:05:35.418 END TEST rpc_client 00:05:35.418 ************************************ 00:05:35.418 03:05:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:35.679 03:05:38 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:35.679 03:05:38 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.679 03:05:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.679 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:35.679 ************************************ 00:05:35.679 START TEST json_config 00:05:35.679 ************************************ 00:05:35.679 03:05:39 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:35.679 03:05:39 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:35.679 03:05:39 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:35.679 03:05:39 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:35.679 03:05:39 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:35.679 03:05:39 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.679 03:05:39 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.679 03:05:39 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.679 03:05:39 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.679 03:05:39 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.679 03:05:39 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.679 03:05:39 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.679 03:05:39 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.679 03:05:39 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.679 03:05:39 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.679 03:05:39 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.679 03:05:39 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:35.679 03:05:39 json_config -- scripts/common.sh@345 -- # : 1 00:05:35.679 03:05:39 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.679 03:05:39 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.679 03:05:39 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:35.679 03:05:39 json_config -- scripts/common.sh@353 -- # local d=1 00:05:35.679 03:05:39 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.679 03:05:39 json_config -- scripts/common.sh@355 -- # echo 1 00:05:35.679 03:05:39 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.679 03:05:39 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:35.679 03:05:39 json_config -- scripts/common.sh@353 -- # local d=2 00:05:35.679 03:05:39 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.679 03:05:39 json_config -- scripts/common.sh@355 -- # echo 2 00:05:35.679 03:05:39 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.679 03:05:39 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.679 03:05:39 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.679 03:05:39 json_config -- scripts/common.sh@368 -- # return 0 00:05:35.679 03:05:39 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.679 03:05:39 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:35.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.679 --rc genhtml_branch_coverage=1 00:05:35.679 --rc genhtml_function_coverage=1 00:05:35.679 --rc genhtml_legend=1 00:05:35.679 --rc geninfo_all_blocks=1 00:05:35.679 --rc geninfo_unexecuted_blocks=1 00:05:35.679 00:05:35.679 ' 00:05:35.679 03:05:39 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:35.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.679 --rc genhtml_branch_coverage=1 00:05:35.679 --rc genhtml_function_coverage=1 00:05:35.679 --rc genhtml_legend=1 00:05:35.679 --rc geninfo_all_blocks=1 00:05:35.679 --rc geninfo_unexecuted_blocks=1 00:05:35.679 00:05:35.679 ' 00:05:35.679 03:05:39 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:35.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.679 --rc genhtml_branch_coverage=1 00:05:35.679 --rc genhtml_function_coverage=1 00:05:35.679 --rc genhtml_legend=1 00:05:35.679 --rc geninfo_all_blocks=1 00:05:35.679 --rc geninfo_unexecuted_blocks=1 00:05:35.679 00:05:35.679 ' 00:05:35.679 03:05:39 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:35.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.679 --rc genhtml_branch_coverage=1 00:05:35.679 --rc genhtml_function_coverage=1 00:05:35.679 --rc genhtml_legend=1 00:05:35.679 --rc geninfo_all_blocks=1 00:05:35.679 --rc geninfo_unexecuted_blocks=1 00:05:35.679 00:05:35.679 ' 00:05:35.679 03:05:39 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.679 03:05:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:35.679 03:05:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.679 03:05:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.679 03:05:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.679 03:05:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.679 03:05:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.679 03:05:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.679 03:05:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.679 03:05:39 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.679 03:05:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.679 03:05:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.679 03:05:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01d7ee47-a46b-4936-b643-475f931e6943 00:05:35.679 03:05:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=01d7ee47-a46b-4936-b643-475f931e6943 00:05:35.679 03:05:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.680 03:05:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.680 03:05:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.680 03:05:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.680 03:05:39 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.680 03:05:39 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.680 03:05:39 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.680 03:05:39 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.680 03:05:39 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.680 03:05:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.680 03:05:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.680 03:05:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.680 03:05:39 json_config -- paths/export.sh@5 -- # export PATH 00:05:35.680 03:05:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.680 03:05:39 json_config -- nvmf/common.sh@51 -- # : 0 00:05:35.680 03:05:39 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.680 03:05:39 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.680 03:05:39 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.680 03:05:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.680 03:05:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.680 03:05:39 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.680 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.680 03:05:39 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.680 03:05:39 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.680 03:05:39 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.680 03:05:39 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:35.680 03:05:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:35.680 03:05:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:35.680 03:05:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:35.680 03:05:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:35.680 WARNING: No tests are enabled so not running JSON configuration tests 00:05:35.680 03:05:39 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:35.680 03:05:39 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:35.680 00:05:35.680 real 0m0.146s 00:05:35.680 user 0m0.092s 00:05:35.680 sys 0m0.057s 00:05:35.680 03:05:39 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.680 ************************************ 00:05:35.680 END TEST json_config 00:05:35.680 ************************************ 00:05:35.680 03:05:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.680 03:05:39 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.680 03:05:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.680 03:05:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.680 03:05:39 -- common/autotest_common.sh@10 -- # set +x 00:05:35.680 ************************************ 00:05:35.680 START TEST json_config_extra_key 00:05:35.680 ************************************ 00:05:35.680 03:05:39 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.680 03:05:39 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:35.680 03:05:39 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:35.680 03:05:39 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:35.941 03:05:39 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.941 03:05:39 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:35.941 03:05:39 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.941 03:05:39 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:35.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.941 --rc genhtml_branch_coverage=1 00:05:35.941 --rc genhtml_function_coverage=1 00:05:35.941 --rc genhtml_legend=1 00:05:35.941 --rc geninfo_all_blocks=1 00:05:35.941 --rc geninfo_unexecuted_blocks=1 00:05:35.941 00:05:35.941 ' 00:05:35.941 03:05:39 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:35.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.941 --rc genhtml_branch_coverage=1 00:05:35.941 --rc genhtml_function_coverage=1 00:05:35.941 --rc genhtml_legend=1 00:05:35.941 --rc geninfo_all_blocks=1 00:05:35.941 --rc geninfo_unexecuted_blocks=1 00:05:35.941 00:05:35.941 ' 00:05:35.941 03:05:39 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:35.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.941 --rc genhtml_branch_coverage=1 00:05:35.941 --rc genhtml_function_coverage=1 00:05:35.941 --rc genhtml_legend=1 00:05:35.941 --rc geninfo_all_blocks=1 00:05:35.941 --rc geninfo_unexecuted_blocks=1 00:05:35.941 00:05:35.941 ' 00:05:35.941 03:05:39 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:35.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.941 --rc genhtml_branch_coverage=1 00:05:35.941 --rc 
genhtml_function_coverage=1 00:05:35.941 --rc genhtml_legend=1 00:05:35.941 --rc geninfo_all_blocks=1 00:05:35.941 --rc geninfo_unexecuted_blocks=1 00:05:35.941 00:05:35.941 ' 00:05:35.941 03:05:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:01d7ee47-a46b-4936-b643-475f931e6943 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=01d7ee47-a46b-4936-b643-475f931e6943 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.941 03:05:39 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.941 03:05:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.941 03:05:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.941 03:05:39 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.941 03:05:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:35.941 03:05:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.941 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.941 03:05:39 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.941 03:05:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:35.941 03:05:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:35.941 03:05:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:35.942 03:05:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:35.942 03:05:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:35.942 03:05:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:35.942 03:05:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:35.942 03:05:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:35.942 03:05:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:35.942 03:05:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.942 INFO: launching applications... 00:05:35.942 03:05:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
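json_config_extra_key keeps its per-application bookkeeping in bash associative arrays keyed by an app name such as 'target': app_pid, app_socket, app_params, and configs_path, exactly as declared in the trace above. A short illustrative sketch of how such a layout drives the launch (start_app_sketch and the shortened paths are hypothetical, for illustration only; the -r, -m, -s, and --json flags match the spdk_tgt invocation traced below):

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='extra_key.json')    # path shortened

    start_app_sketch() {
        local app=$1
        ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
            --json "${configs_path[$app]}" &
        app_pid[$app]=$!        # remembered for the shutdown phase later
    }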
00:05:35.942 03:05:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.942 03:05:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:35.942 03:05:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:35.942 03:05:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.942 03:05:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.942 03:05:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.942 03:05:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.942 03:05:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.942 Waiting for target to run... 00:05:35.942 03:05:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70374 00:05:35.942 03:05:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.942 03:05:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70374 /var/tmp/spdk_tgt.sock 00:05:35.942 03:05:39 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70374 ']' 00:05:35.942 03:05:39 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.942 03:05:39 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.942 03:05:39 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.942 03:05:39 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.942 03:05:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.942 03:05:39 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.942 [2024-11-18 03:05:39.446952] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:35.942 [2024-11-18 03:05:39.447109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70374 ] 00:05:36.513 [2024-11-18 03:05:39.828053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.513 [2024-11-18 03:05:39.854726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.773 03:05:40 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.773 03:05:40 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:36.773 00:05:36.773 INFO: shutting down applications... 00:05:36.773 03:05:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:36.773 03:05:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
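Startup then blocks in waitforlisten until the freshly launched spdk_tgt answers on its UNIX domain RPC socket ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock..."). A simplified sketch of such a readiness poll (the retry count and the rpc.py probe are assumptions about the helper, not its literal body; rpc_get_methods is the same RPC the spdkcli_tcp test issues later in this log):

    # Poll until the target answers RPC on its socket, or bail if it dies.
    waitforlisten_sketch() {
        local pid=$1 sock=$2
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1            # app died early
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                               # never came up
    }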
00:05:36.773 03:05:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:36.773 03:05:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:36.773 03:05:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.773 03:05:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70374 ]] 00:05:36.773 03:05:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70374 00:05:36.773 03:05:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.773 03:05:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.773 03:05:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70374 00:05:36.773 03:05:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.343 03:05:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.343 03:05:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.343 03:05:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70374 00:05:37.343 03:05:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.343 03:05:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:37.343 SPDK target shutdown done 00:05:37.343 03:05:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.343 03:05:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.343 Success 00:05:37.343 03:05:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:37.343 00:05:37.343 real 0m1.555s 00:05:37.343 user 0m1.177s 00:05:37.343 sys 0m0.454s 00:05:37.343 03:05:40 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.343 03:05:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:37.343 ************************************ 00:05:37.343 END TEST json_config_extra_key 00:05:37.343 ************************************ 00:05:37.343 03:05:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:37.343 03:05:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.343 03:05:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.343 03:05:40 -- common/autotest_common.sh@10 -- # set +x 00:05:37.343 ************************************ 00:05:37.343 START TEST alias_rpc 00:05:37.343 ************************************ 00:05:37.343 03:05:40 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:37.343 * Looking for test storage... 
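The shutdown side, traced in json_config/common.sh above, is the mirror image: send SIGINT, then poll kill -0 up to 30 times with a 0.5 s sleep until the target is gone. A minimal sketch of that loop (the timeout branch at the end is an illustrative addition, not part of the traced script):

    shutdown_app_sketch() {
        local pid=$1
        kill -SIGINT "$pid"
        local i
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        echo "app $pid still alive after SIGINT" >&2    # hypothetical fallback
        return 1
    }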
00:05:37.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:37.343 03:05:40 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:37.343 03:05:40 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:37.343 03:05:40 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:37.605 03:05:40 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:37.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
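Each run_test preamble above also exercises the version guard from scripts/common.sh: lt calls cmp_versions, which splits both version strings on '.', '-' and ':' (IFS=.-:) and compares them component by component as integers via decimal. A condensed sketch of that comparison (ver_lt_sketch is a hypothetical stand-in; the real decimal() also handles hex values, omitted here):

    ver_lt_sketch() {        # returns 0 when $1 < $2, mirroring lt()
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((i = 0; i < n; i++)); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}
            [[ $a =~ ^[0-9]+$ ]] || a=0               # decimal(), simplified
            [[ $b =~ ^[0-9]+$ ]] || b=0
            if (( a < b )); then return 0; fi
            if (( a > b )); then return 1; fi
        done
        return 1                                       # equal is not less-than
    }

    # e.g. ver_lt_sketch 1.15 2 succeeds, so the LCOV_OPTS branch above is taken.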
00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.605 03:05:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:37.605 03:05:40 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.605 03:05:40 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:37.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.605 --rc genhtml_branch_coverage=1 00:05:37.605 --rc genhtml_function_coverage=1 00:05:37.605 --rc genhtml_legend=1 00:05:37.605 --rc geninfo_all_blocks=1 00:05:37.605 --rc geninfo_unexecuted_blocks=1 00:05:37.605 00:05:37.605 ' 00:05:37.605 03:05:40 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:37.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.605 --rc genhtml_branch_coverage=1 00:05:37.605 --rc genhtml_function_coverage=1 00:05:37.605 --rc genhtml_legend=1 00:05:37.605 --rc geninfo_all_blocks=1 00:05:37.605 --rc geninfo_unexecuted_blocks=1 00:05:37.605 00:05:37.605 ' 00:05:37.605 03:05:40 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:37.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.605 --rc genhtml_branch_coverage=1 00:05:37.605 --rc genhtml_function_coverage=1 00:05:37.605 --rc genhtml_legend=1 00:05:37.605 --rc geninfo_all_blocks=1 00:05:37.605 --rc geninfo_unexecuted_blocks=1 00:05:37.605 00:05:37.605 ' 00:05:37.605 03:05:40 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:37.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.605 --rc genhtml_branch_coverage=1 00:05:37.605 --rc genhtml_function_coverage=1 00:05:37.605 --rc genhtml_legend=1 00:05:37.605 --rc geninfo_all_blocks=1 00:05:37.605 --rc geninfo_unexecuted_blocks=1 00:05:37.605 00:05:37.605 ' 00:05:37.605 03:05:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:37.605 03:05:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70448 00:05:37.605 03:05:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70448 00:05:37.605 03:05:40 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70448 ']' 00:05:37.605 03:05:40 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.605 03:05:40 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.605 03:05:40 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.605 03:05:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.605 03:05:40 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.605 03:05:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.605 [2024-11-18 03:05:41.037994] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:37.605 [2024-11-18 03:05:41.038141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70448 ] 00:05:37.866 [2024-11-18 03:05:41.188529] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.866 [2024-11-18 03:05:41.238428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.436 03:05:41 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.436 03:05:41 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:38.436 03:05:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:38.695 03:05:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70448 00:05:38.695 03:05:42 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70448 ']' 00:05:38.695 03:05:42 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70448 00:05:38.695 03:05:42 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:38.695 03:05:42 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.695 03:05:42 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70448 00:05:38.695 killing process with pid 70448 00:05:38.695 03:05:42 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.695 03:05:42 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.695 03:05:42 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70448' 00:05:38.695 03:05:42 alias_rpc -- common/autotest_common.sh@969 -- # kill 70448 00:05:38.695 03:05:42 alias_rpc -- common/autotest_common.sh@974 -- # wait 70448 00:05:38.955 ************************************ 00:05:38.955 END TEST alias_rpc 00:05:38.955 ************************************ 00:05:38.955 00:05:38.955 real 0m1.646s 00:05:38.955 user 0m1.644s 00:05:38.955 sys 0m0.471s 00:05:38.955 03:05:42 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.955 03:05:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.955 03:05:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:38.955 03:05:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:38.955 03:05:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.955 03:05:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.955 03:05:42 -- common/autotest_common.sh@10 -- # set +x 00:05:38.955 ************************************ 00:05:38.955 START TEST spdkcli_tcp 00:05:38.955 ************************************ 00:05:38.955 03:05:42 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.215 * Looking for test storage... 
00:05:39.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:39.215 03:05:42 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.215 03:05:42 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.215 03:05:42 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:39.215 03:05:42 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.215 03:05:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:39.215 03:05:42 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.215 03:05:42 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:39.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.216 --rc genhtml_branch_coverage=1 00:05:39.216 --rc genhtml_function_coverage=1 00:05:39.216 --rc genhtml_legend=1 00:05:39.216 --rc geninfo_all_blocks=1 00:05:39.216 --rc geninfo_unexecuted_blocks=1 00:05:39.216 00:05:39.216 ' 00:05:39.216 03:05:42 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:39.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.216 --rc genhtml_branch_coverage=1 00:05:39.216 --rc genhtml_function_coverage=1 00:05:39.216 --rc genhtml_legend=1 00:05:39.216 --rc geninfo_all_blocks=1 00:05:39.216 --rc geninfo_unexecuted_blocks=1 00:05:39.216 
00:05:39.216 ' 00:05:39.216 03:05:42 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:39.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.216 --rc genhtml_branch_coverage=1 00:05:39.216 --rc genhtml_function_coverage=1 00:05:39.216 --rc genhtml_legend=1 00:05:39.216 --rc geninfo_all_blocks=1 00:05:39.216 --rc geninfo_unexecuted_blocks=1 00:05:39.216 00:05:39.216 ' 00:05:39.216 03:05:42 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:39.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.216 --rc genhtml_branch_coverage=1 00:05:39.216 --rc genhtml_function_coverage=1 00:05:39.216 --rc genhtml_legend=1 00:05:39.216 --rc geninfo_all_blocks=1 00:05:39.216 --rc geninfo_unexecuted_blocks=1 00:05:39.216 00:05:39.216 ' 00:05:39.216 03:05:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:39.216 03:05:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:39.216 03:05:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:39.216 03:05:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:39.216 03:05:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:39.216 03:05:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:39.216 03:05:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:39.216 03:05:42 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.216 03:05:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.216 03:05:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70527 00:05:39.216 03:05:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70527 00:05:39.216 03:05:42 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70527 ']' 00:05:39.216 03:05:42 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.216 03:05:42 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.216 03:05:42 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.216 03:05:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.216 03:05:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.216 03:05:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:39.216 [2024-11-18 03:05:42.744642] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:39.216 [2024-11-18 03:05:42.744777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70527 ] 00:05:39.476 [2024-11-18 03:05:42.896493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.476 [2024-11-18 03:05:42.948038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.476 [2024-11-18 03:05:42.948125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.420 03:05:43 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.420 03:05:43 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:40.420 03:05:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70544 00:05:40.420 03:05:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:40.420 03:05:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:40.420 [ 00:05:40.420 "bdev_malloc_delete", 00:05:40.420 "bdev_malloc_create", 00:05:40.420 "bdev_null_resize", 00:05:40.420 "bdev_null_delete", 00:05:40.420 "bdev_null_create", 00:05:40.420 "bdev_nvme_cuse_unregister", 00:05:40.420 "bdev_nvme_cuse_register", 00:05:40.420 "bdev_opal_new_user", 00:05:40.420 "bdev_opal_set_lock_state", 00:05:40.420 "bdev_opal_delete", 00:05:40.420 "bdev_opal_get_info", 00:05:40.420 "bdev_opal_create", 00:05:40.420 "bdev_nvme_opal_revert", 00:05:40.420 "bdev_nvme_opal_init", 00:05:40.420 "bdev_nvme_send_cmd", 00:05:40.420 "bdev_nvme_set_keys", 00:05:40.420 "bdev_nvme_get_path_iostat", 00:05:40.420 "bdev_nvme_get_mdns_discovery_info", 00:05:40.420 "bdev_nvme_stop_mdns_discovery", 00:05:40.420 "bdev_nvme_start_mdns_discovery", 00:05:40.420 "bdev_nvme_set_multipath_policy", 00:05:40.420 "bdev_nvme_set_preferred_path", 00:05:40.420 "bdev_nvme_get_io_paths", 00:05:40.420 "bdev_nvme_remove_error_injection", 00:05:40.420 "bdev_nvme_add_error_injection", 00:05:40.420 "bdev_nvme_get_discovery_info", 00:05:40.420 "bdev_nvme_stop_discovery", 00:05:40.420 "bdev_nvme_start_discovery", 00:05:40.420 "bdev_nvme_get_controller_health_info", 00:05:40.420 "bdev_nvme_disable_controller", 00:05:40.420 "bdev_nvme_enable_controller", 00:05:40.420 "bdev_nvme_reset_controller", 00:05:40.420 "bdev_nvme_get_transport_statistics", 00:05:40.420 "bdev_nvme_apply_firmware", 00:05:40.420 "bdev_nvme_detach_controller", 00:05:40.421 "bdev_nvme_get_controllers", 00:05:40.421 "bdev_nvme_attach_controller", 00:05:40.421 "bdev_nvme_set_hotplug", 00:05:40.421 "bdev_nvme_set_options", 00:05:40.421 "bdev_passthru_delete", 00:05:40.421 "bdev_passthru_create", 00:05:40.421 "bdev_lvol_set_parent_bdev", 00:05:40.421 "bdev_lvol_set_parent", 00:05:40.421 "bdev_lvol_check_shallow_copy", 00:05:40.421 "bdev_lvol_start_shallow_copy", 00:05:40.421 "bdev_lvol_grow_lvstore", 00:05:40.421 "bdev_lvol_get_lvols", 00:05:40.421 "bdev_lvol_get_lvstores", 00:05:40.421 "bdev_lvol_delete", 00:05:40.421 "bdev_lvol_set_read_only", 00:05:40.421 "bdev_lvol_resize", 00:05:40.421 "bdev_lvol_decouple_parent", 00:05:40.421 "bdev_lvol_inflate", 00:05:40.421 "bdev_lvol_rename", 00:05:40.421 "bdev_lvol_clone_bdev", 00:05:40.421 "bdev_lvol_clone", 00:05:40.421 "bdev_lvol_snapshot", 00:05:40.421 "bdev_lvol_create", 00:05:40.421 "bdev_lvol_delete_lvstore", 00:05:40.421 "bdev_lvol_rename_lvstore", 00:05:40.421 
"bdev_lvol_create_lvstore", 00:05:40.421 "bdev_raid_set_options", 00:05:40.421 "bdev_raid_remove_base_bdev", 00:05:40.421 "bdev_raid_add_base_bdev", 00:05:40.421 "bdev_raid_delete", 00:05:40.421 "bdev_raid_create", 00:05:40.421 "bdev_raid_get_bdevs", 00:05:40.421 "bdev_error_inject_error", 00:05:40.421 "bdev_error_delete", 00:05:40.421 "bdev_error_create", 00:05:40.421 "bdev_split_delete", 00:05:40.421 "bdev_split_create", 00:05:40.421 "bdev_delay_delete", 00:05:40.421 "bdev_delay_create", 00:05:40.421 "bdev_delay_update_latency", 00:05:40.421 "bdev_zone_block_delete", 00:05:40.421 "bdev_zone_block_create", 00:05:40.421 "blobfs_create", 00:05:40.421 "blobfs_detect", 00:05:40.421 "blobfs_set_cache_size", 00:05:40.421 "bdev_xnvme_delete", 00:05:40.421 "bdev_xnvme_create", 00:05:40.421 "bdev_aio_delete", 00:05:40.421 "bdev_aio_rescan", 00:05:40.421 "bdev_aio_create", 00:05:40.421 "bdev_ftl_set_property", 00:05:40.421 "bdev_ftl_get_properties", 00:05:40.421 "bdev_ftl_get_stats", 00:05:40.421 "bdev_ftl_unmap", 00:05:40.421 "bdev_ftl_unload", 00:05:40.421 "bdev_ftl_delete", 00:05:40.421 "bdev_ftl_load", 00:05:40.421 "bdev_ftl_create", 00:05:40.421 "bdev_virtio_attach_controller", 00:05:40.421 "bdev_virtio_scsi_get_devices", 00:05:40.421 "bdev_virtio_detach_controller", 00:05:40.421 "bdev_virtio_blk_set_hotplug", 00:05:40.421 "bdev_iscsi_delete", 00:05:40.421 "bdev_iscsi_create", 00:05:40.421 "bdev_iscsi_set_options", 00:05:40.421 "accel_error_inject_error", 00:05:40.421 "ioat_scan_accel_module", 00:05:40.421 "dsa_scan_accel_module", 00:05:40.421 "iaa_scan_accel_module", 00:05:40.421 "keyring_file_remove_key", 00:05:40.421 "keyring_file_add_key", 00:05:40.421 "keyring_linux_set_options", 00:05:40.421 "fsdev_aio_delete", 00:05:40.421 "fsdev_aio_create", 00:05:40.421 "iscsi_get_histogram", 00:05:40.421 "iscsi_enable_histogram", 00:05:40.421 "iscsi_set_options", 00:05:40.421 "iscsi_get_auth_groups", 00:05:40.421 "iscsi_auth_group_remove_secret", 00:05:40.421 "iscsi_auth_group_add_secret", 00:05:40.421 "iscsi_delete_auth_group", 00:05:40.421 "iscsi_create_auth_group", 00:05:40.421 "iscsi_set_discovery_auth", 00:05:40.421 "iscsi_get_options", 00:05:40.421 "iscsi_target_node_request_logout", 00:05:40.421 "iscsi_target_node_set_redirect", 00:05:40.421 "iscsi_target_node_set_auth", 00:05:40.421 "iscsi_target_node_add_lun", 00:05:40.421 "iscsi_get_stats", 00:05:40.421 "iscsi_get_connections", 00:05:40.421 "iscsi_portal_group_set_auth", 00:05:40.421 "iscsi_start_portal_group", 00:05:40.421 "iscsi_delete_portal_group", 00:05:40.421 "iscsi_create_portal_group", 00:05:40.421 "iscsi_get_portal_groups", 00:05:40.421 "iscsi_delete_target_node", 00:05:40.421 "iscsi_target_node_remove_pg_ig_maps", 00:05:40.421 "iscsi_target_node_add_pg_ig_maps", 00:05:40.421 "iscsi_create_target_node", 00:05:40.421 "iscsi_get_target_nodes", 00:05:40.421 "iscsi_delete_initiator_group", 00:05:40.421 "iscsi_initiator_group_remove_initiators", 00:05:40.421 "iscsi_initiator_group_add_initiators", 00:05:40.421 "iscsi_create_initiator_group", 00:05:40.421 "iscsi_get_initiator_groups", 00:05:40.421 "nvmf_set_crdt", 00:05:40.421 "nvmf_set_config", 00:05:40.421 "nvmf_set_max_subsystems", 00:05:40.421 "nvmf_stop_mdns_prr", 00:05:40.421 "nvmf_publish_mdns_prr", 00:05:40.421 "nvmf_subsystem_get_listeners", 00:05:40.421 "nvmf_subsystem_get_qpairs", 00:05:40.421 "nvmf_subsystem_get_controllers", 00:05:40.421 "nvmf_get_stats", 00:05:40.421 "nvmf_get_transports", 00:05:40.421 "nvmf_create_transport", 00:05:40.421 "nvmf_get_targets", 00:05:40.421 
"nvmf_delete_target", 00:05:40.421 "nvmf_create_target", 00:05:40.421 "nvmf_subsystem_allow_any_host", 00:05:40.421 "nvmf_subsystem_set_keys", 00:05:40.421 "nvmf_subsystem_remove_host", 00:05:40.421 "nvmf_subsystem_add_host", 00:05:40.421 "nvmf_ns_remove_host", 00:05:40.421 "nvmf_ns_add_host", 00:05:40.421 "nvmf_subsystem_remove_ns", 00:05:40.421 "nvmf_subsystem_set_ns_ana_group", 00:05:40.421 "nvmf_subsystem_add_ns", 00:05:40.421 "nvmf_subsystem_listener_set_ana_state", 00:05:40.421 "nvmf_discovery_get_referrals", 00:05:40.421 "nvmf_discovery_remove_referral", 00:05:40.421 "nvmf_discovery_add_referral", 00:05:40.421 "nvmf_subsystem_remove_listener", 00:05:40.421 "nvmf_subsystem_add_listener", 00:05:40.421 "nvmf_delete_subsystem", 00:05:40.421 "nvmf_create_subsystem", 00:05:40.421 "nvmf_get_subsystems", 00:05:40.421 "env_dpdk_get_mem_stats", 00:05:40.421 "nbd_get_disks", 00:05:40.421 "nbd_stop_disk", 00:05:40.421 "nbd_start_disk", 00:05:40.421 "ublk_recover_disk", 00:05:40.421 "ublk_get_disks", 00:05:40.421 "ublk_stop_disk", 00:05:40.421 "ublk_start_disk", 00:05:40.421 "ublk_destroy_target", 00:05:40.421 "ublk_create_target", 00:05:40.421 "virtio_blk_create_transport", 00:05:40.421 "virtio_blk_get_transports", 00:05:40.421 "vhost_controller_set_coalescing", 00:05:40.421 "vhost_get_controllers", 00:05:40.421 "vhost_delete_controller", 00:05:40.421 "vhost_create_blk_controller", 00:05:40.421 "vhost_scsi_controller_remove_target", 00:05:40.421 "vhost_scsi_controller_add_target", 00:05:40.421 "vhost_start_scsi_controller", 00:05:40.421 "vhost_create_scsi_controller", 00:05:40.421 "thread_set_cpumask", 00:05:40.421 "scheduler_set_options", 00:05:40.421 "framework_get_governor", 00:05:40.421 "framework_get_scheduler", 00:05:40.421 "framework_set_scheduler", 00:05:40.421 "framework_get_reactors", 00:05:40.421 "thread_get_io_channels", 00:05:40.421 "thread_get_pollers", 00:05:40.421 "thread_get_stats", 00:05:40.421 "framework_monitor_context_switch", 00:05:40.421 "spdk_kill_instance", 00:05:40.421 "log_enable_timestamps", 00:05:40.421 "log_get_flags", 00:05:40.421 "log_clear_flag", 00:05:40.421 "log_set_flag", 00:05:40.421 "log_get_level", 00:05:40.421 "log_set_level", 00:05:40.421 "log_get_print_level", 00:05:40.421 "log_set_print_level", 00:05:40.421 "framework_enable_cpumask_locks", 00:05:40.421 "framework_disable_cpumask_locks", 00:05:40.421 "framework_wait_init", 00:05:40.421 "framework_start_init", 00:05:40.421 "scsi_get_devices", 00:05:40.421 "bdev_get_histogram", 00:05:40.421 "bdev_enable_histogram", 00:05:40.421 "bdev_set_qos_limit", 00:05:40.421 "bdev_set_qd_sampling_period", 00:05:40.421 "bdev_get_bdevs", 00:05:40.421 "bdev_reset_iostat", 00:05:40.421 "bdev_get_iostat", 00:05:40.421 "bdev_examine", 00:05:40.421 "bdev_wait_for_examine", 00:05:40.421 "bdev_set_options", 00:05:40.421 "accel_get_stats", 00:05:40.421 "accel_set_options", 00:05:40.421 "accel_set_driver", 00:05:40.421 "accel_crypto_key_destroy", 00:05:40.421 "accel_crypto_keys_get", 00:05:40.421 "accel_crypto_key_create", 00:05:40.421 "accel_assign_opc", 00:05:40.421 "accel_get_module_info", 00:05:40.421 "accel_get_opc_assignments", 00:05:40.421 "vmd_rescan", 00:05:40.421 "vmd_remove_device", 00:05:40.421 "vmd_enable", 00:05:40.421 "sock_get_default_impl", 00:05:40.421 "sock_set_default_impl", 00:05:40.421 "sock_impl_set_options", 00:05:40.421 "sock_impl_get_options", 00:05:40.421 "iobuf_get_stats", 00:05:40.421 "iobuf_set_options", 00:05:40.421 "keyring_get_keys", 00:05:40.421 "framework_get_pci_devices", 00:05:40.421 
"framework_get_config", 00:05:40.421 "framework_get_subsystems", 00:05:40.421 "fsdev_set_opts", 00:05:40.421 "fsdev_get_opts", 00:05:40.421 "trace_get_info", 00:05:40.421 "trace_get_tpoint_group_mask", 00:05:40.421 "trace_disable_tpoint_group", 00:05:40.421 "trace_enable_tpoint_group", 00:05:40.421 "trace_clear_tpoint_mask", 00:05:40.421 "trace_set_tpoint_mask", 00:05:40.421 "notify_get_notifications", 00:05:40.421 "notify_get_types", 00:05:40.421 "spdk_get_version", 00:05:40.421 "rpc_get_methods" 00:05:40.421 ] 00:05:40.421 03:05:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:40.421 03:05:43 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:40.421 03:05:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.421 03:05:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:40.421 03:05:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70527 00:05:40.421 03:05:43 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70527 ']' 00:05:40.421 03:05:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70527 00:05:40.422 03:05:43 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:40.422 03:05:43 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.422 03:05:43 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70527 00:05:40.422 killing process with pid 70527 00:05:40.422 03:05:43 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.422 03:05:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.422 03:05:43 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70527' 00:05:40.422 03:05:43 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70527 00:05:40.422 03:05:43 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70527 00:05:40.996 ************************************ 00:05:40.996 END TEST spdkcli_tcp 00:05:40.996 ************************************ 00:05:40.996 00:05:40.996 real 0m1.834s 00:05:40.996 user 0m3.188s 00:05:40.996 sys 0m0.535s 00:05:40.996 03:05:44 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.996 03:05:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.996 03:05:44 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:40.996 03:05:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.996 03:05:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.996 03:05:44 -- common/autotest_common.sh@10 -- # set +x 00:05:40.996 ************************************ 00:05:40.996 START TEST dpdk_mem_utility 00:05:40.996 ************************************ 00:05:40.996 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:40.996 * Looking for test storage... 
00:05:40.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:40.996 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.996 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.996 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.996 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.996 03:05:44 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:40.996 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.996 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.996 --rc genhtml_branch_coverage=1 00:05:40.996 --rc genhtml_function_coverage=1 00:05:40.996 --rc genhtml_legend=1 00:05:40.996 --rc geninfo_all_blocks=1 00:05:40.996 --rc geninfo_unexecuted_blocks=1 00:05:40.996 00:05:40.996 ' 00:05:40.996 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.996 --rc 
genhtml_branch_coverage=1 00:05:40.996 --rc genhtml_function_coverage=1 00:05:40.996 --rc genhtml_legend=1 00:05:40.996 --rc geninfo_all_blocks=1 00:05:40.996 --rc geninfo_unexecuted_blocks=1 00:05:40.996 00:05:40.996 ' 00:05:40.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.996 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:40.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.996 --rc genhtml_branch_coverage=1 00:05:40.996 --rc genhtml_function_coverage=1 00:05:40.996 --rc genhtml_legend=1 00:05:40.996 --rc geninfo_all_blocks=1 00:05:40.996 --rc geninfo_unexecuted_blocks=1 00:05:40.996 00:05:40.996 ' 00:05:40.997 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.997 --rc genhtml_branch_coverage=1 00:05:40.997 --rc genhtml_function_coverage=1 00:05:40.997 --rc genhtml_legend=1 00:05:40.997 --rc geninfo_all_blocks=1 00:05:40.997 --rc geninfo_unexecuted_blocks=1 00:05:40.997 00:05:40.997 ' 00:05:40.997 03:05:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:40.997 03:05:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70627 00:05:40.997 03:05:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70627 00:05:40.997 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70627 ']' 00:05:40.997 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.997 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.997 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.997 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.997 03:05:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.997 03:05:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.259 [2024-11-18 03:05:44.645355] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:41.259 [2024-11-18 03:05:44.645522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70627 ]
00:05:41.259 [2024-11-18 03:05:44.796939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.520 [2024-11-18 03:05:44.850297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.095 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:42.095 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0
00:05:42.095 03:05:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:42.095 03:05:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:42.095 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:42.095 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:42.095 {
00:05:42.095 "filename": "/tmp/spdk_mem_dump.txt"
00:05:42.095 }
00:05:42.095 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:42.095 03:05:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:05:42.095 DPDK memory size 860.000000 MiB in 1 heap(s)
00:05:42.095 1 heaps totaling size 860.000000 MiB
00:05:42.095 size: 860.000000 MiB heap id: 0
00:05:42.095 end heaps----------
00:05:42.095 9 mempools totaling size 642.649841 MiB
00:05:42.095 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:42.095 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:42.095 size: 92.545471 MiB name: bdev_io_70627
00:05:42.095 size: 51.011292 MiB name: evtpool_70627
00:05:42.095 size: 50.003479 MiB name: msgpool_70627
00:05:42.095 size: 36.509338 MiB name: fsdev_io_70627
00:05:42.095 size: 21.763794 MiB name: PDU_Pool
00:05:42.095 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:42.095 size: 0.026123 MiB name: Session_Pool
00:05:42.095 end mempools-------
00:05:42.095 6 memzones totaling size 4.142822 MiB
00:05:42.095 size: 1.000366 MiB name: RG_ring_0_70627
00:05:42.095 size: 1.000366 MiB name: RG_ring_1_70627
00:05:42.095 size: 1.000366 MiB name: RG_ring_4_70627
00:05:42.095 size: 1.000366 MiB name: RG_ring_5_70627
00:05:42.095 size: 0.125366 MiB name: RG_ring_2_70627
00:05:42.095 size: 0.015991 MiB name: RG_ring_3_70627
00:05:42.095 end memzones-------
00:05:42.095 03:05:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:05:42.095 heap id: 0 total size: 860.000000 MiB number of busy elements: 308 number of free elements: 16
00:05:42.095 list of free elements.
size: 13.936340 MiB 00:05:42.095 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:42.095 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:42.095 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:42.095 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:42.095 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:42.096 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:42.096 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:42.096 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:42.096 element at address: 0x200000200000 with size: 0.835022 MiB 00:05:42.096 element at address: 0x20001d800000 with size: 0.567505 MiB 00:05:42.096 element at address: 0x20000d800000 with size: 0.489258 MiB 00:05:42.096 element at address: 0x200003e00000 with size: 0.488281 MiB 00:05:42.096 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:42.096 element at address: 0x200007000000 with size: 0.480286 MiB 00:05:42.096 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:42.096 element at address: 0x200003a00000 with size: 0.353210 MiB 00:05:42.096 list of standard malloc elements. size: 199.266968 MiB 00:05:42.096 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:42.096 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:42.096 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:42.096 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:42.096 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:42.096 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:42.096 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:42.096 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:42.096 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:42.096 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:05:42.096 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a5a6c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a5eb80 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:42.096 element at 
address: 0x200003e7d900 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000707af40 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:42.096 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000d87d880 
with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:42.096 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:42.096 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891480 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891540 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891600 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892680 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892980 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893280 with size: 0.000183 MiB 
00:05:42.097 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893640 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894000 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894180 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d895080 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d895200 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:05:42.097 element at 
address: 0x20002ac6c480 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6e940 
with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:05:42.097 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:42.098 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:42.098 list of memzone associated elements. 
size: 646.796692 MiB 00:05:42.098 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:42.098 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:42.098 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:42.098 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:42.098 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:42.098 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70627_0 00:05:42.098 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:42.098 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70627_0 00:05:42.098 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:42.098 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70627_0 00:05:42.098 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:42.098 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70627_0 00:05:42.098 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:42.098 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:42.098 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:42.098 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:42.098 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:42.098 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70627 00:05:42.098 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:42.098 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70627 00:05:42.098 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:42.098 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70627 00:05:42.098 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:42.098 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:42.098 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:42.098 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:42.098 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:42.098 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:42.098 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:42.098 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:42.098 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:42.098 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70627 00:05:42.098 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:42.098 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70627 00:05:42.098 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:42.098 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70627 00:05:42.098 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:42.098 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70627 00:05:42.098 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:42.098 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70627 00:05:42.098 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:42.098 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70627 00:05:42.098 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:42.098 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:42.098 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:42.098 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:05:42.098 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:42.098 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:42.098 element at address: 0x200003a5ec40 with size: 0.125488 MiB 00:05:42.098 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70627 00:05:42.098 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:42.098 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:42.098 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:42.098 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:42.098 element at address: 0x200003a5a980 with size: 0.016113 MiB 00:05:42.098 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70627 00:05:42.098 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:42.098 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:42.098 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:42.098 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70627 00:05:42.098 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:42.098 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70627 00:05:42.098 element at address: 0x200003a5a780 with size: 0.000305 MiB 00:05:42.098 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70627 00:05:42.098 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:42.098 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:42.098 03:05:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:42.098 03:05:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70627 00:05:42.098 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70627 ']' 00:05:42.098 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70627 00:05:42.098 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:42.098 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.098 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70627 00:05:42.098 killing process with pid 70627 00:05:42.098 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.098 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.098 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70627' 00:05:42.098 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70627 00:05:42.098 03:05:45 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70627 00:05:42.669 00:05:42.669 real 0m1.688s 00:05:42.669 user 0m1.625s 00:05:42.669 sys 0m0.514s 00:05:42.669 03:05:46 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.669 ************************************ 00:05:42.669 END TEST dpdk_mem_utility 00:05:42.669 ************************************ 00:05:42.669 03:05:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.669 03:05:46 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:42.669 03:05:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.669 03:05:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.669 03:05:46 -- common/autotest_common.sh@10 -- # set +x 
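The dpdk_mem_utility pass above drives everything through test_dpdk_mem_info.sh, but the same inspection can be repeated by hand against a running target. A minimal sketch, using only the paths and RPCs visible in the log above (the dump file location comes from the env_dpdk_get_mem_stats reply; the target is assumed to be listening on the default /var/tmp/spdk.sock):

# Sketch: manual replay of the dpdk_mem_utility steps against a running spdk_tgt.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # writes /tmp/spdk_mem_dump.txt
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                 # heap/mempool/memzone summary, as logged above
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0            # per-element breakdown of heap 0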
00:05:42.669 ************************************ 00:05:42.669 START TEST event 00:05:42.669 ************************************ 00:05:42.669 03:05:46 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:42.669 * Looking for test storage... 00:05:42.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:42.669 03:05:46 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:42.669 03:05:46 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:42.669 03:05:46 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:42.930 03:05:46 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:42.930 03:05:46 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.930 03:05:46 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.930 03:05:46 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.930 03:05:46 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.930 03:05:46 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.930 03:05:46 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.930 03:05:46 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.930 03:05:46 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.930 03:05:46 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.930 03:05:46 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.930 03:05:46 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.931 03:05:46 event -- scripts/common.sh@344 -- # case "$op" in 00:05:42.931 03:05:46 event -- scripts/common.sh@345 -- # : 1 00:05:42.931 03:05:46 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.931 03:05:46 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.931 03:05:46 event -- scripts/common.sh@365 -- # decimal 1 00:05:42.931 03:05:46 event -- scripts/common.sh@353 -- # local d=1 00:05:42.931 03:05:46 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.931 03:05:46 event -- scripts/common.sh@355 -- # echo 1 00:05:42.931 03:05:46 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.931 03:05:46 event -- scripts/common.sh@366 -- # decimal 2 00:05:42.931 03:05:46 event -- scripts/common.sh@353 -- # local d=2 00:05:42.931 03:05:46 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.931 03:05:46 event -- scripts/common.sh@355 -- # echo 2 00:05:42.931 03:05:46 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.931 03:05:46 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.931 03:05:46 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.931 03:05:46 event -- scripts/common.sh@368 -- # return 0 00:05:42.931 03:05:46 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.931 03:05:46 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:42.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.931 --rc genhtml_branch_coverage=1 00:05:42.931 --rc genhtml_function_coverage=1 00:05:42.931 --rc genhtml_legend=1 00:05:42.931 --rc geninfo_all_blocks=1 00:05:42.931 --rc geninfo_unexecuted_blocks=1 00:05:42.931 00:05:42.931 ' 00:05:42.931 03:05:46 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:42.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.931 --rc genhtml_branch_coverage=1 00:05:42.931 --rc genhtml_function_coverage=1 00:05:42.931 --rc genhtml_legend=1 00:05:42.931 --rc 
geninfo_all_blocks=1 00:05:42.931 --rc geninfo_unexecuted_blocks=1 00:05:42.931 00:05:42.931 ' 00:05:42.931 03:05:46 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:42.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.931 --rc genhtml_branch_coverage=1 00:05:42.931 --rc genhtml_function_coverage=1 00:05:42.931 --rc genhtml_legend=1 00:05:42.931 --rc geninfo_all_blocks=1 00:05:42.931 --rc geninfo_unexecuted_blocks=1 00:05:42.931 00:05:42.931 ' 00:05:42.931 03:05:46 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:42.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.931 --rc genhtml_branch_coverage=1 00:05:42.931 --rc genhtml_function_coverage=1 00:05:42.931 --rc genhtml_legend=1 00:05:42.931 --rc geninfo_all_blocks=1 00:05:42.931 --rc geninfo_unexecuted_blocks=1 00:05:42.931 00:05:42.931 ' 00:05:42.931 03:05:46 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:42.931 03:05:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:42.931 03:05:46 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.931 03:05:46 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:42.931 03:05:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.931 03:05:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.931 ************************************ 00:05:42.931 START TEST event_perf 00:05:42.931 ************************************ 00:05:42.931 03:05:46 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.931 Running I/O for 1 seconds...[2024-11-18 03:05:46.359256] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:42.931 [2024-11-18 03:05:46.359634] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70708 ] 00:05:43.192 [2024-11-18 03:05:46.511588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.192 [2024-11-18 03:05:46.575704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.192 [2024-11-18 03:05:46.576173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.192 [2024-11-18 03:05:46.576302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.192 Running I/O for 1 seconds...[2024-11-18 03:05:46.576410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.135 00:05:44.135 lcore 0: 130924 00:05:44.135 lcore 1: 130924 00:05:44.135 lcore 2: 130924 00:05:44.135 lcore 3: 130924 00:05:44.135 done. 
00:05:44.135 ************************************ 00:05:44.135 END TEST event_perf 00:05:44.135 ************************************ 00:05:44.135 00:05:44.135 real 0m1.348s 00:05:44.135 user 0m4.116s 00:05:44.135 sys 0m0.105s 00:05:44.135 03:05:47 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.135 03:05:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.396 03:05:47 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.396 03:05:47 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:44.396 03:05:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.396 03:05:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.396 ************************************ 00:05:44.396 START TEST event_reactor 00:05:44.396 ************************************ 00:05:44.396 03:05:47 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:44.396 [2024-11-18 03:05:47.777159] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:44.396 [2024-11-18 03:05:47.777362] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70747 ] 00:05:44.396 [2024-11-18 03:05:47.930134] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.658 [2024-11-18 03:05:47.988960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.604 test_start 00:05:45.604 oneshot 00:05:45.604 tick 100 00:05:45.604 tick 100 00:05:45.604 tick 250 00:05:45.604 tick 100 00:05:45.604 tick 100 00:05:45.604 tick 100 00:05:45.604 tick 250 00:05:45.604 tick 500 00:05:45.604 tick 100 00:05:45.604 tick 100 00:05:45.604 tick 250 00:05:45.604 tick 100 00:05:45.604 tick 100 00:05:45.604 test_end 00:05:45.604 ************************************ 00:05:45.604 END TEST event_reactor 00:05:45.604 ************************************ 00:05:45.604 00:05:45.604 real 0m1.335s 00:05:45.604 user 0m1.133s 00:05:45.604 sys 0m0.089s 00:05:45.604 03:05:49 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.604 03:05:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:45.604 03:05:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.604 03:05:49 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:45.604 03:05:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.604 03:05:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.604 ************************************ 00:05:45.604 START TEST event_reactor_perf 00:05:45.604 ************************************ 00:05:45.604 03:05:49 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.865 [2024-11-18 03:05:49.184393] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:45.865 [2024-11-18 03:05:49.184791] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70784 ] 00:05:45.865 [2024-11-18 03:05:49.340493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.865 [2024-11-18 03:05:49.402184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.255 test_start 00:05:47.255 test_end 00:05:47.255 Performance: 305182 events per second 00:05:47.255 ************************************ 00:05:47.255 END TEST event_reactor_perf 00:05:47.255 ************************************ 00:05:47.255 00:05:47.255 real 0m1.340s 00:05:47.255 user 0m1.128s 00:05:47.255 sys 0m0.100s 00:05:47.255 03:05:50 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.255 03:05:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.255 03:05:50 event -- event/event.sh@49 -- # uname -s 00:05:47.255 03:05:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:47.255 03:05:50 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:47.255 03:05:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.255 03:05:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.255 03:05:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.255 ************************************ 00:05:47.255 START TEST event_scheduler 00:05:47.255 ************************************ 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:47.255 * Looking for test storage... 
00:05:47.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:47.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.255 03:05:50 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:47.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.255 --rc genhtml_branch_coverage=1 00:05:47.255 --rc genhtml_function_coverage=1 00:05:47.255 --rc genhtml_legend=1 00:05:47.255 --rc geninfo_all_blocks=1 00:05:47.255 --rc geninfo_unexecuted_blocks=1 00:05:47.255 00:05:47.255 ' 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:47.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.255 --rc genhtml_branch_coverage=1 00:05:47.255 --rc genhtml_function_coverage=1 00:05:47.255 --rc genhtml_legend=1 00:05:47.255 --rc geninfo_all_blocks=1 00:05:47.255 --rc geninfo_unexecuted_blocks=1 00:05:47.255 00:05:47.255 ' 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:47.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.255 --rc genhtml_branch_coverage=1 00:05:47.255 --rc genhtml_function_coverage=1 00:05:47.255 --rc genhtml_legend=1 00:05:47.255 --rc geninfo_all_blocks=1 00:05:47.255 --rc geninfo_unexecuted_blocks=1 00:05:47.255 00:05:47.255 ' 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:47.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.255 --rc genhtml_branch_coverage=1 00:05:47.255 --rc genhtml_function_coverage=1 00:05:47.255 --rc genhtml_legend=1 00:05:47.255 --rc geninfo_all_blocks=1 00:05:47.255 --rc geninfo_unexecuted_blocks=1 00:05:47.255 00:05:47.255 ' 00:05:47.255 03:05:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:47.255 03:05:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70854 00:05:47.255 03:05:50 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.255 03:05:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70854 00:05:47.255 03:05:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70854 ']' 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.255 03:05:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.255 [2024-11-18 03:05:50.794235] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:47.255 [2024-11-18 03:05:50.794419] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70854 ] 00:05:47.518 [2024-11-18 03:05:50.948637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.518 [2024-11-18 03:05:51.005720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.518 [2024-11-18 03:05:51.006129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.518 [2024-11-18 03:05:51.006354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.518 [2024-11-18 03:05:51.006475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.465 03:05:51 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.465 03:05:51 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:48.465 03:05:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:48.465 03:05:51 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.465 03:05:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.465 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.465 POWER: Cannot set governor of lcore 0 to userspace 00:05:48.465 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.465 POWER: Cannot set governor of lcore 0 to performance 00:05:48.465 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.465 POWER: Cannot set governor of lcore 0 to userspace 00:05:48.465 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.465 POWER: Cannot set governor of lcore 0 to userspace 00:05:48.465 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:48.465 POWER: Unable to set Power Management Environment for lcore 0 00:05:48.465 [2024-11-18 03:05:51.688157] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:48.465 [2024-11-18 03:05:51.688227] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:48.465 [2024-11-18 03:05:51.688239] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:48.465 [2024-11-18 03:05:51.688276] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:48.465 [2024-11-18 03:05:51.688300] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:48.465 [2024-11-18 03:05:51.688309] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:48.465 03:05:51 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.465 03:05:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:48.465 03:05:51 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.465 03:05:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.465 [2024-11-18 03:05:51.775933] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:48.465 03:05:51 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.465 03:05:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:48.465 03:05:51 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.465 03:05:51 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.465 03:05:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.465 ************************************ 00:05:48.465 START TEST scheduler_create_thread 00:05:48.465 ************************************ 00:05:48.465 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.466 2 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.466 3 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.466 4 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.466 5 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.466 6 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.466 7 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.466 8 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.466 9 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.466 10 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.466 03:05:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.848 03:05:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.848 03:05:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:49.848 03:05:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:49.848 03:05:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.848 03:05:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.790 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.790 03:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:50.790 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.790 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.377 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.377 03:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:51.377 03:05:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:51.377 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.377 03:05:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.315 ************************************ 00:05:52.315 END TEST scheduler_create_thread 00:05:52.315 ************************************ 00:05:52.315 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.315 00:05:52.315 real 0m3.887s 00:05:52.315 user 0m0.018s 00:05:52.315 sys 0m0.004s 00:05:52.315 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.315 03:05:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.315 03:05:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:52.315 03:05:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70854 00:05:52.315 03:05:55 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70854 ']' 00:05:52.315 03:05:55 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70854 00:05:52.315 03:05:55 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:52.315 03:05:55 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.315 03:05:55 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70854 00:05:52.315 killing process with pid 70854 00:05:52.315 03:05:55 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:52.315 03:05:55 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:52.315 03:05:55 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70854' 00:05:52.315 03:05:55 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70854 00:05:52.315 03:05:55 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70854 00:05:52.572 [2024-11-18 03:05:56.063249] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
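For readers following the trace: every thread in the scheduler_create_thread test above is created over JSON-RPC through the scheduler test plugin, where -m is the pinned core mask, -a the active percentage, and the create RPC prints the new thread id (11 and 12 above). Reduced to plain rpc.py calls, a sketch of the traced commands with rpc_cmd expanded to the script and socket seen earlier in this log (assumes the plugin module is importable, as the harness arranges):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
$rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # one per core mask bit
$rpc scheduler_thread_create -n half_active -a 0              # starts idle (thread 11)...
$rpc scheduler_thread_set_active 11 50                        # ...then raised to 50% active
tid=$($rpc scheduler_thread_create -n deleted -a 100)         # capture returned id (12)
$rpc scheduler_thread_delete "$tid"                           # and remove it again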
00:05:52.831 00:05:52.831 real 0m5.722s 00:05:52.831 user 0m12.039s 00:05:52.831 sys 0m0.404s 00:05:52.831 03:05:56 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.831 03:05:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.831 ************************************ 00:05:52.831 END TEST event_scheduler 00:05:52.831 ************************************ 00:05:52.831 03:05:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:52.831 03:05:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:52.831 03:05:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.831 03:05:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.831 03:05:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.831 ************************************ 00:05:52.831 START TEST app_repeat 00:05:52.832 ************************************ 00:05:52.832 03:05:56 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70966 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.832 Process app_repeat pid: 70966 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70966' 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:52.832 spdk_app_start Round 0 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:52.832 03:05:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70966 /var/tmp/spdk-nbd.sock 00:05:52.832 03:05:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70966 ']' 00:05:52.832 03:05:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.832 03:05:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.832 03:05:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.832 03:05:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.832 03:05:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.832 [2024-11-18 03:05:56.368009] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:52.832 [2024-11-18 03:05:56.368125] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70966 ] 00:05:53.090 [2024-11-18 03:05:56.515523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.090 [2024-11-18 03:05:56.549546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.090 [2024-11-18 03:05:56.549596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.034 03:05:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.034 03:05:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:54.034 03:05:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.034 Malloc0 00:05:54.034 03:05:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.295 Malloc1 00:05:54.295 03:05:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.295 03:05:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.581 /dev/nbd0 00:05:54.581 03:05:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.581 03:05:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.581 03:05:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:54.581 03:05:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:54.581 03:05:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:54.581 03:05:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:54.581 03:05:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:54.581 03:05:57 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:54.581 03:05:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:54.581 03:05:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:54.581 03:05:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.581 1+0 records in 00:05:54.581 1+0 records out 00:05:54.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320459 s, 12.8 MB/s 00:05:54.581 03:05:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.581 03:05:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:54.581 03:05:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.581 03:05:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:54.581 03:05:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:54.581 03:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.581 03:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.582 03:05:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.582 /dev/nbd1 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.876 1+0 records in 00:05:54.876 1+0 records out 00:05:54.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580729 s, 7.1 MB/s 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:54.876 03:05:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.876 
03:05:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.876 { 00:05:54.876 "nbd_device": "/dev/nbd0", 00:05:54.876 "bdev_name": "Malloc0" 00:05:54.876 }, 00:05:54.876 { 00:05:54.876 "nbd_device": "/dev/nbd1", 00:05:54.876 "bdev_name": "Malloc1" 00:05:54.876 } 00:05:54.876 ]' 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.876 { 00:05:54.876 "nbd_device": "/dev/nbd0", 00:05:54.876 "bdev_name": "Malloc0" 00:05:54.876 }, 00:05:54.876 { 00:05:54.876 "nbd_device": "/dev/nbd1", 00:05:54.876 "bdev_name": "Malloc1" 00:05:54.876 } 00:05:54.876 ]' 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.876 /dev/nbd1' 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.876 /dev/nbd1' 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.876 256+0 records in 00:05:54.876 256+0 records out 00:05:54.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00582503 s, 180 MB/s 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.876 03:05:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.137 256+0 records in 00:05:55.138 256+0 records out 00:05:55.138 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019277 s, 54.4 MB/s 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.138 256+0 records in 00:05:55.138 256+0 records out 00:05:55.138 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191901 s, 54.6 MB/s 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.138 03:05:58 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.138 03:05:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.399 03:05:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.399 03:05:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.399 03:05:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.399 03:05:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.399 03:05:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.399 03:05:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.399 03:05:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.399 03:05:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.399 03:05:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.399 03:05:58 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.399 03:05:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.659 03:05:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.659 03:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.659 03:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.659 03:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.659 03:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.659 03:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.659 03:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.659 03:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.659 03:05:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.659 03:05:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.659 03:05:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.659 03:05:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.659 03:05:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.919 03:05:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.919 [2024-11-18 03:05:59.458949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.919 [2024-11-18 03:05:59.485635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.919 [2024-11-18 03:05:59.485737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.180 [2024-11-18 03:05:59.513930] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.180 [2024-11-18 03:05:59.513980] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.480 spdk_app_start Round 1 00:05:59.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.480 03:06:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.480 03:06:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:59.480 03:06:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70966 /var/tmp/spdk-nbd.sock 00:05:59.480 03:06:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70966 ']' 00:05:59.480 03:06:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.480 03:06:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.480 03:06:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:59.480 03:06:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.480 03:06:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.480 03:06:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.480 03:06:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:59.480 03:06:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.480 Malloc0 00:05:59.480 03:06:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.480 Malloc1 00:05:59.480 03:06:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.480 03:06:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.741 /dev/nbd0 00:05:59.741 03:06:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.741 03:06:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.741 1+0 records in 00:05:59.741 1+0 records out 
00:05:59.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289385 s, 14.2 MB/s 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:59.741 03:06:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:59.741 03:06:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.741 03:06:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.741 03:06:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.003 /dev/nbd1 00:06:00.003 03:06:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.003 03:06:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.003 1+0 records in 00:06:00.003 1+0 records out 00:06:00.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000138008 s, 29.7 MB/s 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:00.003 03:06:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:00.003 03:06:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.003 03:06:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.003 03:06:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.003 03:06:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.003 03:06:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.264 03:06:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.264 { 00:06:00.264 "nbd_device": "/dev/nbd0", 00:06:00.264 "bdev_name": "Malloc0" 00:06:00.264 }, 00:06:00.264 { 00:06:00.264 "nbd_device": "/dev/nbd1", 00:06:00.264 "bdev_name": "Malloc1" 00:06:00.264 } 
00:06:00.264 ]' 00:06:00.264 03:06:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.264 { 00:06:00.264 "nbd_device": "/dev/nbd0", 00:06:00.264 "bdev_name": "Malloc0" 00:06:00.264 }, 00:06:00.264 { 00:06:00.264 "nbd_device": "/dev/nbd1", 00:06:00.264 "bdev_name": "Malloc1" 00:06:00.264 } 00:06:00.264 ]' 00:06:00.264 03:06:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.264 03:06:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.264 /dev/nbd1' 00:06:00.264 03:06:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.264 /dev/nbd1' 00:06:00.264 03:06:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.264 03:06:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.264 03:06:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.264 03:06:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.264 03:06:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.264 03:06:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.264 03:06:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.264 03:06:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.265 256+0 records in 00:06:00.265 256+0 records out 00:06:00.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00675443 s, 155 MB/s 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.265 256+0 records in 00:06:00.265 256+0 records out 00:06:00.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166429 s, 63.0 MB/s 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.265 256+0 records in 00:06:00.265 256+0 records out 00:06:00.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166713 s, 62.9 MB/s 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.265 03:06:03 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.265 03:06:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.525 03:06:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.525 03:06:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.525 03:06:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.525 03:06:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.525 03:06:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.525 03:06:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.525 03:06:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.525 03:06:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.525 03:06:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.525 03:06:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.786 03:06:04 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:01.046 03:06:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.046 03:06:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.046 03:06:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.046 03:06:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.046 03:06:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.046 03:06:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.046 03:06:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.046 03:06:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.046 03:06:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.046 03:06:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.046 03:06:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.308 [2024-11-18 03:06:04.673579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.308 [2024-11-18 03:06:04.700851] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.308 [2024-11-18 03:06:04.700964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.308 [2024-11-18 03:06:04.729011] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.308 [2024-11-18 03:06:04.729049] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.612 spdk_app_start Round 2 00:06:04.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.612 03:06:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.612 03:06:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:04.612 03:06:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70966 /var/tmp/spdk-nbd.sock 00:06:04.612 03:06:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70966 ']' 00:06:04.612 03:06:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.612 03:06:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.612 03:06:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
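Worth noting in the teardown trace above: after both nbd_stop_disk calls, nbd_get_disks returns the empty JSON list '[]', jq therefore emits no device names, and grep -c /dev/nbd counts 0 (grep's non-zero exit on no matches is tolerated, hence the traced '# true'), so the harness proves both devices are gone before the next round begins. As a single pipeline, a sketch built from the rpc.py invocation in this log:

# expect zero /dev/nbd entries once both disks are stopped
count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] && echo "all nbd devices stopped"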
00:06:04.612 03:06:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.612 03:06:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.612 03:06:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.612 03:06:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:04.612 03:06:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.612 Malloc0 00:06:04.612 03:06:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.873 Malloc1 00:06:04.873 03:06:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.873 /dev/nbd0 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.873 03:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.873 03:06:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:04.873 03:06:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:04.873 03:06:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:04.873 03:06:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:04.873 03:06:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:04.873 03:06:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:04.873 03:06:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:04.873 03:06:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:04.873 03:06:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.873 1+0 records in 00:06:04.873 1+0 records out 
00:06:04.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239965 s, 17.1 MB/s 00:06:04.873 03:06:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:05.134 03:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.134 03:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.134 03:06:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.134 /dev/nbd1 00:06:05.134 03:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.134 03:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.134 1+0 records in 00:06:05.134 1+0 records out 00:06:05.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163547 s, 25.0 MB/s 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:05.134 03:06:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:05.134 03:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.134 03:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.134 03:06:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.134 03:06:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.134 03:06:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.395 { 00:06:05.395 "nbd_device": "/dev/nbd0", 00:06:05.395 "bdev_name": "Malloc0" 00:06:05.395 }, 00:06:05.395 { 00:06:05.395 "nbd_device": "/dev/nbd1", 00:06:05.395 "bdev_name": "Malloc1" 00:06:05.395 } 
00:06:05.395 ]' 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.395 { 00:06:05.395 "nbd_device": "/dev/nbd0", 00:06:05.395 "bdev_name": "Malloc0" 00:06:05.395 }, 00:06:05.395 { 00:06:05.395 "nbd_device": "/dev/nbd1", 00:06:05.395 "bdev_name": "Malloc1" 00:06:05.395 } 00:06:05.395 ]' 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.395 /dev/nbd1' 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.395 /dev/nbd1' 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.395 256+0 records in 00:06:05.395 256+0 records out 00:06:05.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00687888 s, 152 MB/s 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.395 256+0 records in 00:06:05.395 256+0 records out 00:06:05.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141207 s, 74.3 MB/s 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.395 03:06:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.653 256+0 records in 00:06:05.653 256+0 records out 00:06:05.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176052 s, 59.6 MB/s 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.653 03:06:08 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.653 03:06:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.653 03:06:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.653 03:06:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.653 03:06:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.653 03:06:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.653 03:06:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.653 03:06:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.653 03:06:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.653 03:06:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.653 03:06:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.653 03:06:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.950 03:06:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.950 03:06:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.950 03:06:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.950 03:06:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.950 03:06:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.950 03:06:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.950 03:06:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.950 03:06:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.950 03:06:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.950 03:06:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.950 03:06:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.208 03:06:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.208 03:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.209 03:06:09 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:06.209 03:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.209 03:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.209 03:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.209 03:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.209 03:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.209 03:06:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.209 03:06:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.209 03:06:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.209 03:06:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.209 03:06:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.466 03:06:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.466 [2024-11-18 03:06:09.939339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.466 [2024-11-18 03:06:09.966142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.466 [2024-11-18 03:06:09.966145] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.466 [2024-11-18 03:06:09.994392] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.466 [2024-11-18 03:06:09.994431] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.760 03:06:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70966 /var/tmp/spdk-nbd.sock 00:06:09.760 03:06:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70966 ']' 00:06:09.760 03:06:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.760 03:06:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.760 03:06:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
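The write/verify pass traced just above boils down to a short dd/cmp round trip; a minimal sketch follows, with paths, sizes, and flags taken from the trace (the loop structure is an assumption, not the verbatim bdev/nbd_common.sh source):

    # Fill a scratch file with random data, copy it onto each exported NBD
    # device, then compare the device contents byte for byte.
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of test data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write pass
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                              # verify pass: 1M = 256 * 4096 bytes
    done
    rm "$tmp"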
00:06:09.760 03:06:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.760 03:06:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:09.760 03:06:13 event.app_repeat -- event/event.sh@39 -- # killprocess 70966 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70966 ']' 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70966 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70966 00:06:09.760 killing process with pid 70966 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70966' 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70966 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70966 00:06:09.760 spdk_app_start is called in Round 0. 00:06:09.760 Shutdown signal received, stop current app iteration 00:06:09.760 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:09.760 spdk_app_start is called in Round 1. 00:06:09.760 Shutdown signal received, stop current app iteration 00:06:09.760 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:09.760 spdk_app_start is called in Round 2. 00:06:09.760 Shutdown signal received, stop current app iteration 00:06:09.760 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:09.760 spdk_app_start is called in Round 3. 00:06:09.760 Shutdown signal received, stop current app iteration 00:06:09.760 03:06:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:09.760 03:06:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:09.760 00:06:09.760 real 0m16.874s 00:06:09.760 user 0m37.732s 00:06:09.760 sys 0m2.034s 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.760 03:06:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.760 ************************************ 00:06:09.760 END TEST app_repeat 00:06:09.760 ************************************ 00:06:09.760 03:06:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:09.760 03:06:13 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:09.760 03:06:13 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.760 03:06:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.760 03:06:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.760 ************************************ 00:06:09.760 START TEST cpu_locks 00:06:09.760 ************************************ 00:06:09.760 03:06:13 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:09.760 * Looking for test storage... 
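Every test in this log runs under the same run_test wrapper, which produces the START TEST / END TEST banners and the real/user/sys timings seen here. A simplified sketch of that pattern (the real helper in autotest_common.sh also manages xtrace and failure bookkeeping, which this omits):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"          # the timed body emits the real/user/sys lines
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }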
00:06:10.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:10.023 03:06:13 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.023 03:06:13 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.023 03:06:13 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.023 03:06:13 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.023 03:06:13 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:10.023 03:06:13 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.023 03:06:13 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.023 --rc genhtml_branch_coverage=1 00:06:10.023 --rc genhtml_function_coverage=1 00:06:10.023 --rc genhtml_legend=1 00:06:10.023 --rc geninfo_all_blocks=1 00:06:10.023 --rc geninfo_unexecuted_blocks=1 00:06:10.023 00:06:10.023 ' 00:06:10.023 03:06:13 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.023 --rc genhtml_branch_coverage=1 00:06:10.023 --rc genhtml_function_coverage=1 
00:06:10.023 --rc genhtml_legend=1 00:06:10.023 --rc geninfo_all_blocks=1 00:06:10.023 --rc geninfo_unexecuted_blocks=1 00:06:10.023 00:06:10.023 ' 00:06:10.023 03:06:13 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.023 --rc genhtml_branch_coverage=1 00:06:10.023 --rc genhtml_function_coverage=1 00:06:10.023 --rc genhtml_legend=1 00:06:10.023 --rc geninfo_all_blocks=1 00:06:10.023 --rc geninfo_unexecuted_blocks=1 00:06:10.023 00:06:10.023 ' 00:06:10.023 03:06:13 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.023 --rc genhtml_branch_coverage=1 00:06:10.023 --rc genhtml_function_coverage=1 00:06:10.023 --rc genhtml_legend=1 00:06:10.023 --rc geninfo_all_blocks=1 00:06:10.023 --rc geninfo_unexecuted_blocks=1 00:06:10.023 00:06:10.023 ' 00:06:10.023 03:06:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:10.023 03:06:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:10.023 03:06:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:10.023 03:06:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:10.023 03:06:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.023 03:06:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.023 03:06:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.023 ************************************ 00:06:10.023 START TEST default_locks 00:06:10.023 ************************************ 00:06:10.023 03:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:10.023 03:06:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71385 00:06:10.023 03:06:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.023 03:06:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71385 00:06:10.023 03:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71385 ']' 00:06:10.023 03:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.023 03:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.023 03:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.023 03:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.023 03:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.023 [2024-11-18 03:06:13.494855] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:10.023 [2024-11-18 03:06:13.495117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71385 ] 00:06:10.284 [2024-11-18 03:06:13.640486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.284 [2024-11-18 03:06:13.671135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.856 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.856 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:10.856 03:06:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71385 00:06:10.856 03:06:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71385 00:06:10.856 03:06:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.117 03:06:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71385 00:06:11.117 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71385 ']' 00:06:11.117 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71385 00:06:11.117 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:11.117 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.117 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71385 00:06:11.117 killing process with pid 71385 00:06:11.117 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.117 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.117 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71385' 00:06:11.117 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71385 00:06:11.117 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71385 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71385 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71385 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:11.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
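The locks_exist probe traced above is a one-liner over util-linux lslocks: a process holding a CPU-core lock shows a file-lock entry whose name carries the spdk_cpu_lock prefix. A sketch, with the helper name and grep pattern from the trace (the wrapper body is illustrative):

    locks_exist() {
        local pid=$1
        # lslocks -p lists the file locks held by one process; the core
        # lock file shows up with an spdk_cpu_lock name.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 71385 && echo 'core lock is held'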
00:06:11.380 ERROR: process (pid: 71385) is no longer running 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71385 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71385 ']' 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.380 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71385) - No such process 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.380 00:06:11.380 real 0m1.391s 00:06:11.380 user 0m1.457s 00:06:11.380 sys 0m0.384s 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.380 ************************************ 00:06:11.380 03:06:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.380 END TEST default_locks 00:06:11.380 ************************************ 00:06:11.380 03:06:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:11.380 03:06:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.380 03:06:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.380 03:06:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.380 ************************************ 00:06:11.380 START TEST default_locks_via_rpc 00:06:11.380 ************************************ 00:06:11.380 03:06:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:11.380 03:06:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71433 00:06:11.380 03:06:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71433 00:06:11.380 03:06:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71433 ']' 
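The default_locks_via_rpc test starting here toggles the same lock at runtime. The rpc_cmd calls in the trace correspond to scripts/rpc.py invocations; a hedged sketch of the flow, assuming rpc_cmd is a thin wrapper over rpc.py:

    # Release the core locks while the target keeps running, then re-take
    # them and confirm the lock file is visible again.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock    # $pid: the running spdk_tgt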
00:06:11.380 03:06:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.380 03:06:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.380 03:06:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.380 03:06:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.380 03:06:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.380 03:06:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.380 [2024-11-18 03:06:14.942891] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:11.380 [2024-11-18 03:06:14.943010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71433 ] 00:06:11.642 [2024-11-18 03:06:15.093414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.642 [2024-11-18 03:06:15.131646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.214 03:06:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.214 03:06:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:12.214 03:06:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:12.214 03:06:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.214 03:06:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.474 03:06:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.474 03:06:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:12.474 03:06:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.474 03:06:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.474 03:06:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.474 03:06:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.474 03:06:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.474 03:06:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.474 03:06:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.474 03:06:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71433 00:06:12.474 03:06:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71433 00:06:12.474 03:06:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.735 03:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71433 00:06:12.735 03:06:16 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71433 ']' 00:06:12.735 03:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71433 00:06:12.735 03:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:12.735 03:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.735 03:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71433 00:06:12.735 killing process with pid 71433 00:06:12.735 03:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.735 03:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.735 03:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71433' 00:06:12.735 03:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71433 00:06:12.735 03:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71433 00:06:12.997 ************************************ 00:06:12.997 END TEST default_locks_via_rpc 00:06:12.997 ************************************ 00:06:12.997 00:06:12.997 real 0m1.560s 00:06:12.997 user 0m1.580s 00:06:12.997 sys 0m0.477s 00:06:12.997 03:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.997 03:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.997 03:06:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:12.997 03:06:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.997 03:06:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.997 03:06:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.997 ************************************ 00:06:12.997 START TEST non_locking_app_on_locked_coremask 00:06:12.997 ************************************ 00:06:12.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.997 03:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:12.997 03:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71478 00:06:12.997 03:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71478 /var/tmp/spdk.sock 00:06:12.997 03:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71478 ']' 00:06:12.997 03:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.997 03:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.997 03:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.997 03:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
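non_locking_app_on_locked_coremask, which starts here, runs two targets on the same core mask; the second can only come up because it opts out of the lock. The commands below are lifted from the trace (only the backgrounding and ordering are assumed):

    build/bin/spdk_tgt -m 0x1 &                       # first instance claims the core-0 lock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                      # second instance skips the lock, so both run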
00:06:12.997 03:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.998 03:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.258 [2024-11-18 03:06:16.576077] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:13.258 [2024-11-18 03:06:16.576248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71478 ] 00:06:13.258 [2024-11-18 03:06:16.724085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.258 [2024-11-18 03:06:16.774075] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.204 03:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.204 03:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:14.204 03:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71490 00:06:14.204 03:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71490 /var/tmp/spdk2.sock 00:06:14.204 03:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:14.204 03:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71490 ']' 00:06:14.204 03:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.204 03:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.204 03:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.204 03:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.204 03:06:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.204 [2024-11-18 03:06:17.499905] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:14.204 [2024-11-18 03:06:17.500345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71490 ] 00:06:14.204 [2024-11-18 03:06:17.659941] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
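Teardown in this suite goes through killprocess; its probe sequence (kill -0, ps --no-headers -o comm=, the sudo check) is visible in the trace that follows. A reduced sketch; the real autotest_common.sh helper handles more cases than this:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                 # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")    # reactor_0 for these apps
        [ "$name" = sudo ] && return 1             # simplified: the real helper special-cases sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }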
00:06:14.204 [2024-11-18 03:06:17.660020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.204 [2024-11-18 03:06:17.760296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.143 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.143 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:15.143 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71478 00:06:15.143 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.144 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71478 00:06:15.404 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71478 00:06:15.404 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71478 ']' 00:06:15.404 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71478 00:06:15.404 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.404 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.404 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71478 00:06:15.404 killing process with pid 71478 00:06:15.404 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.404 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.404 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71478' 00:06:15.404 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71478 00:06:15.404 03:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71478 00:06:16.344 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71490 00:06:16.344 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71490 ']' 00:06:16.344 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71490 00:06:16.344 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:16.344 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.344 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71490 00:06:16.344 killing process with pid 71490 00:06:16.344 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.344 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.344 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71490' 00:06:16.344 03:06:19 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71490 00:06:16.344 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71490 00:06:16.605 ************************************ 00:06:16.605 END TEST non_locking_app_on_locked_coremask 00:06:16.605 ************************************ 00:06:16.605 00:06:16.605 real 0m3.449s 00:06:16.605 user 0m3.582s 00:06:16.605 sys 0m1.058s 00:06:16.605 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.605 03:06:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.605 03:06:19 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:16.605 03:06:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.605 03:06:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.605 03:06:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.605 ************************************ 00:06:16.605 START TEST locking_app_on_unlocked_coremask 00:06:16.605 ************************************ 00:06:16.605 03:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:16.605 03:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71559 00:06:16.605 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71559 /var/tmp/spdk.sock 00:06:16.605 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71559 ']' 00:06:16.605 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.605 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.605 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:16.605 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.605 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.605 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.605 [2024-11-18 03:06:20.089569] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:16.606 [2024-11-18 03:06:20.089726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71559 ] 00:06:16.867 [2024-11-18 03:06:20.240345] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
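locking_app_on_unlocked_coremask inverts the previous setup: the first target starts with locks disabled, leaving the second instance on the same mask free to claim them. Commands from the trace (backgrounding assumed):

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # holds no core lock
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # takes the core-0 lock itself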
00:06:16.867 [2024-11-18 03:06:20.240418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.867 [2024-11-18 03:06:20.293489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.440 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.440 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:17.440 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71575 00:06:17.440 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71575 /var/tmp/spdk2.sock 00:06:17.440 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.440 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71575 ']' 00:06:17.440 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.440 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.440 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.440 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.440 03:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.702 [2024-11-18 03:06:21.028792] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:17.702 [2024-11-18 03:06:21.029278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71575 ] 00:06:17.702 [2024-11-18 03:06:21.187120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.963 [2024-11-18 03:06:21.292026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.534 03:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.534 03:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:18.534 03:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71575 00:06:18.534 03:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.534 03:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71575 00:06:18.793 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71559 00:06:18.793 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71559 ']' 00:06:18.793 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71559 00:06:18.793 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:18.793 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.793 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71559 00:06:18.793 killing process with pid 71559 00:06:18.793 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.793 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.793 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71559' 00:06:18.793 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71559 00:06:18.793 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71559 00:06:19.360 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71575 00:06:19.360 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71575 ']' 00:06:19.360 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71575 00:06:19.360 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:19.360 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.360 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71575 00:06:19.360 killing process with pid 71575 00:06:19.360 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.360 03:06:22 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.360 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71575' 00:06:19.360 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71575 00:06:19.360 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71575 00:06:19.619 00:06:19.619 real 0m2.984s 00:06:19.619 user 0m3.139s 00:06:19.619 sys 0m1.027s 00:06:19.619 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.619 ************************************ 00:06:19.619 END TEST locking_app_on_unlocked_coremask 00:06:19.619 ************************************ 00:06:19.619 03:06:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.619 03:06:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:19.619 03:06:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.619 03:06:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.619 03:06:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.619 ************************************ 00:06:19.619 START TEST locking_app_on_locked_coremask 00:06:19.619 ************************************ 00:06:19.619 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:19.619 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71633 00:06:19.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.619 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71633 /var/tmp/spdk.sock 00:06:19.619 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71633 ']' 00:06:19.619 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.619 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.619 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.619 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.619 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.619 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.619 [2024-11-18 03:06:23.111684] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
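locking_app_on_locked_coremask, which starts here, is the negative case: both targets keep locks enabled on the same mask, so the second must refuse to start. The expected shape of the failure (the error text appears verbatim a little further down in the trace):

    build/bin/spdk_tgt -m 0x1 &                        # pid 71633 claims core 0
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # aborts at startup:
    # app.c: Cannot create lock on core 0, probably process 71633 has claimed it
    # app.c: Unable to acquire lock on assigned core mask - exiting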
00:06:19.619 [2024-11-18 03:06:23.111803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71633 ] 00:06:19.880 [2024-11-18 03:06:23.260727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.880 [2024-11-18 03:06:23.301517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.451 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.451 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:20.451 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71649 00:06:20.451 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.451 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71649 /var/tmp/spdk2.sock 00:06:20.451 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:20.451 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71649 /var/tmp/spdk2.sock 00:06:20.451 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:20.451 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.451 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:20.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.451 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.452 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71649 /var/tmp/spdk2.sock 00:06:20.452 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71649 ']' 00:06:20.452 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.452 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.452 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.452 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.452 03:06:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.452 [2024-11-18 03:06:24.023763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
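That expected failure is asserted through the NOT wrapper, which inverts an exit status so a failing command counts as a pass. A reduced sketch (the real helper also validates its argument and tracks es exactly as traced):

    NOT() {
        "$@"
        (( $? != 0 ))    # succeed only when the wrapped command failed
    }
    NOT waitforlisten 71649 /var/tmp/spdk2.sock   # passes: the second target never listened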
00:06:20.452 [2024-11-18 03:06:24.024057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71649 ] 00:06:20.711 [2024-11-18 03:06:24.170211] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71633 has claimed it. 00:06:20.711 [2024-11-18 03:06:24.170259] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:21.283 ERROR: process (pid: 71649) is no longer running 00:06:21.283 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71649) - No such process 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71633 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71633 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71633 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71633 ']' 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71633 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.284 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71633 00:06:21.545 killing process with pid 71633 00:06:21.545 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.545 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.545 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71633' 00:06:21.545 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71633 00:06:21.545 03:06:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71633 00:06:21.545 ************************************ 00:06:21.545 END TEST locking_app_on_locked_coremask 00:06:21.545 ************************************ 00:06:21.545 00:06:21.545 real 0m2.058s 00:06:21.545 user 0m2.295s 00:06:21.545 sys 0m0.533s 00:06:21.545 03:06:25 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.545 03:06:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.819 03:06:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:21.819 03:06:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.819 03:06:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.819 03:06:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.819 ************************************ 00:06:21.819 START TEST locking_overlapped_coremask 00:06:21.819 ************************************ 00:06:21.819 03:06:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:21.819 03:06:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71691 00:06:21.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.819 03:06:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71691 /var/tmp/spdk.sock 00:06:21.819 03:06:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71691 ']' 00:06:21.819 03:06:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.819 03:06:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.819 03:06:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.819 03:06:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.819 03:06:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.819 03:06:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:21.819 [2024-11-18 03:06:25.223765] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
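Every suite entry above goes through the same run_test wrapper: the '[' 2 -le 1 ']' guard checks it got a test name plus a command, the starred banners frame START TEST / END TEST, and the body is timed, producing the real/user/sys lines. A rough sketch under those observations; banner widths and the xtrace toggling around the body are approximations of what the trace shows:

run_test() {
    # needs a test name and at least one command word, per the '[' 2 -le 1 ']' guard
    [ "$#" -le 1 ] && return 1
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                     # emits the real/user/sys timing lines
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}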
00:06:21.819 [2024-11-18 03:06:25.223889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71691 ] 00:06:21.819 [2024-11-18 03:06:25.370133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.082 [2024-11-18 03:06:25.413879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.082 [2024-11-18 03:06:25.414198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.082 [2024-11-18 03:06:25.414224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71709 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71709 /var/tmp/spdk2.sock 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71709 /var/tmp/spdk2.sock 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:22.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71709 /var/tmp/spdk2.sock 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71709 ']' 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.653 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.653 [2024-11-18 03:06:26.127118] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
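The first target was started with -m 0x7 and its three reactors came up on cores 0-2; the second is launched with -m 0x1c. The overlap is plain bitwise arithmetic, sketched here to show why the claim error that follows names core 2 specifically:

# 0x7  = 0b00111 -> cores 0,1,2 (the three reactors above)
# 0x1c = 0b11100 -> cores 2,3,4
printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4, i.e. core 2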
00:06:22.653 [2024-11-18 03:06:26.127231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71709 ] 00:06:22.914 [2024-11-18 03:06:26.280164] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71691 has claimed it. 00:06:22.914 [2024-11-18 03:06:26.280225] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:23.488 ERROR: process (pid: 71709) is no longer running 00:06:23.488 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71709) - No such process 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71691 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71691 ']' 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71691 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71691 00:06:23.488 killing process with pid 71691 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71691' 00:06:23.488 03:06:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71691 00:06:23.488 03:06:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71691 00:06:23.488 ************************************ 00:06:23.488 END TEST locking_overlapped_coremask 00:06:23.488 00:06:23.488 real 0m1.869s 00:06:23.488 user 0m5.101s 00:06:23.488 sys 0m0.418s 00:06:23.488 03:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.488 03:06:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.488 ************************************ 00:06:23.749 03:06:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:23.750 03:06:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.750 03:06:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.750 03:06:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.750 ************************************ 00:06:23.750 START TEST locking_overlapped_coremask_via_rpc 00:06:23.750 ************************************ 00:06:23.750 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:23.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.750 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71751 00:06:23.750 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71751 /var/tmp/spdk.sock 00:06:23.750 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71751 ']' 00:06:23.750 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.750 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.750 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:23.750 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.750 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.750 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.750 [2024-11-18 03:06:27.152128] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:23.750 [2024-11-18 03:06:27.152247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71751 ] 00:06:23.750 [2024-11-18 03:06:27.297726] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
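This suite differs from the previous one only in the --disable-cpumask-locks flag: the first target above starts on -m 0x7 without taking per-core locks (hence the "CPU core locks deactivated" notice), and a second one with the overlapping mask 0x1c follows just below and also comes up cleanly. A sketch of the launch shape, paths relative to the SPDK repo as in the command lines in the trace:

# neither instance touches /var/tmp/spdk_cpu_lock_* at startup,
# so the overlapping masks (0x7 and 0x1c share core 2) both come up
build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &

The locks are claimed later, on demand, through the framework_enable_cpumask_locks RPC exercised below.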
00:06:23.750 [2024-11-18 03:06:27.297771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.011 [2024-11-18 03:06:27.333743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.011 [2024-11-18 03:06:27.334114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.011 [2024-11-18 03:06:27.334179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.583 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.583 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.583 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71769 00:06:24.583 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:24.583 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71769 /var/tmp/spdk2.sock 00:06:24.583 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71769 ']' 00:06:24.583 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.583 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.583 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.583 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.583 03:06:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.583 [2024-11-18 03:06:28.047417] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:24.583 [2024-11-18 03:06:28.047981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71769 ] 00:06:24.844 [2024-11-18 03:06:28.201840] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
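Each claimed core is backed by a lock file named /var/tmp/spdk_cpu_lock_NNN, which is why the earlier suite could assert liveness with lslocks -p <pid> piped through grep -q spdk_cpu_lock. Once this test enables the locks over RPC, it verifies that exactly cores 000-002 are held using the check_remaining_locks helper seen in the xtrace; a sketch matching that trace, modulo the local declarations:

check_remaining_locks() {
    # glob expands to whatever lock files currently exist
    local locks=(/var/tmp/spdk_cpu_lock_*)
    # brace expansion builds the expected names for cores 0, 1 and 2
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
}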
00:06:24.844 [2024-11-18 03:06:28.201879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.844 [2024-11-18 03:06:28.266642] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.844 [2024-11-18 03:06:28.266786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.844 [2024-11-18 03:06:28.266874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:25.416 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.416 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.417 [2024-11-18 03:06:28.903500] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71751 has claimed it. 00:06:25.417 request: 00:06:25.417 { 00:06:25.417 "method": "framework_enable_cpumask_locks", 00:06:25.417 "req_id": 1 00:06:25.417 } 00:06:25.417 Got JSON-RPC error response 00:06:25.417 response: 00:06:25.417 { 00:06:25.417 "code": -32603, 00:06:25.417 "message": "Failed to claim CPU core: 2" 00:06:25.417 } 00:06:25.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
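rpc_cmd in the trace above is a thin wrapper that ends up driving scripts/rpc.py at the target's RPC socket, so the two sides of this test reduce to two plain invocations; a sketch of the sequence with the wrapper replaced by direct rpc.py calls:

# first target (default socket /var/tmp/spdk.sock) claims cores 0-2 after the fact
scripts/rpc.py framework_enable_cpumask_locks
# second target then fails its overlapping claim on core 2, returning the
# -32603 "Failed to claim CPU core: 2" JSON-RPC error shown in the log
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks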
00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71751 /var/tmp/spdk.sock 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71751 ']' 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.417 03:06:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.677 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.677 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:25.677 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71769 /var/tmp/spdk2.sock 00:06:25.677 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71769 ']' 00:06:25.677 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.677 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.677 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:25.677 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.677 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.938 ************************************ 00:06:25.938 END TEST locking_overlapped_coremask_via_rpc 00:06:25.938 ************************************ 00:06:25.938 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.938 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:25.938 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:25.938 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.938 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.938 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.938 00:06:25.938 real 0m2.242s 00:06:25.938 user 0m1.055s 00:06:25.938 sys 0m0.122s 00:06:25.938 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.938 03:06:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.938 03:06:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:25.938 03:06:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71751 ]] 00:06:25.938 03:06:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71751 00:06:25.938 03:06:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71751 ']' 00:06:25.938 03:06:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71751 00:06:25.938 03:06:29 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:25.938 03:06:29 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.938 03:06:29 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71751 00:06:25.938 killing process with pid 71751 00:06:25.938 03:06:29 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.938 03:06:29 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.938 03:06:29 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71751' 00:06:25.938 03:06:29 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71751 00:06:25.938 03:06:29 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71751 00:06:26.198 03:06:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71769 ]] 00:06:26.198 03:06:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71769 00:06:26.198 03:06:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71769 ']' 00:06:26.198 03:06:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71769 00:06:26.198 03:06:29 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:26.198 03:06:29 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.198 
03:06:29 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71769 00:06:26.198 killing process with pid 71769 00:06:26.198 03:06:29 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:26.198 03:06:29 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:26.198 03:06:29 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71769' 00:06:26.198 03:06:29 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71769 00:06:26.198 03:06:29 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71769 00:06:26.459 03:06:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.459 Process with pid 71751 is not found 00:06:26.459 03:06:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:26.459 03:06:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71751 ]] 00:06:26.459 03:06:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71751 00:06:26.459 03:06:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71751 ']' 00:06:26.459 03:06:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71751 00:06:26.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71751) - No such process 00:06:26.459 03:06:29 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71751 is not found' 00:06:26.459 03:06:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71769 ]] 00:06:26.459 03:06:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71769 00:06:26.459 Process with pid 71769 is not found 00:06:26.459 03:06:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71769 ']' 00:06:26.459 03:06:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71769 00:06:26.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71769) - No such process 00:06:26.459 03:06:29 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71769 is not found' 00:06:26.459 03:06:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.459 ************************************ 00:06:26.459 END TEST cpu_locks 00:06:26.459 ************************************ 00:06:26.459 00:06:26.459 real 0m16.632s 00:06:26.459 user 0m28.274s 00:06:26.459 sys 0m4.745s 00:06:26.459 03:06:29 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.459 03:06:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.459 ************************************ 00:06:26.459 END TEST event 00:06:26.459 ************************************ 00:06:26.459 00:06:26.459 real 0m43.774s 00:06:26.459 user 1m24.594s 00:06:26.459 sys 0m7.723s 00:06:26.459 03:06:29 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.459 03:06:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.459 03:06:29 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:26.459 03:06:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.459 03:06:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.459 03:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:26.459 ************************************ 00:06:26.459 START TEST thread 00:06:26.459 ************************************ 00:06:26.459 03:06:29 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:26.721 * Looking for test storage... 
00:06:26.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:26.721 03:06:30 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:26.721 03:06:30 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:26.721 03:06:30 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:26.721 03:06:30 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:26.721 03:06:30 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.721 03:06:30 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.721 03:06:30 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.721 03:06:30 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.721 03:06:30 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.721 03:06:30 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.721 03:06:30 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.721 03:06:30 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.721 03:06:30 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.721 03:06:30 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.721 03:06:30 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.721 03:06:30 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:26.721 03:06:30 thread -- scripts/common.sh@345 -- # : 1 00:06:26.721 03:06:30 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.721 03:06:30 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.721 03:06:30 thread -- scripts/common.sh@365 -- # decimal 1 00:06:26.721 03:06:30 thread -- scripts/common.sh@353 -- # local d=1 00:06:26.721 03:06:30 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.721 03:06:30 thread -- scripts/common.sh@355 -- # echo 1 00:06:26.721 03:06:30 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.721 03:06:30 thread -- scripts/common.sh@366 -- # decimal 2 00:06:26.721 03:06:30 thread -- scripts/common.sh@353 -- # local d=2 00:06:26.721 03:06:30 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.721 03:06:30 thread -- scripts/common.sh@355 -- # echo 2 00:06:26.721 03:06:30 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.721 03:06:30 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.721 03:06:30 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.721 03:06:30 thread -- scripts/common.sh@368 -- # return 0 00:06:26.721 03:06:30 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.721 03:06:30 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:26.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.721 --rc genhtml_branch_coverage=1 00:06:26.721 --rc genhtml_function_coverage=1 00:06:26.721 --rc genhtml_legend=1 00:06:26.721 --rc geninfo_all_blocks=1 00:06:26.721 --rc geninfo_unexecuted_blocks=1 00:06:26.721 00:06:26.721 ' 00:06:26.721 03:06:30 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:26.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.721 --rc genhtml_branch_coverage=1 00:06:26.721 --rc genhtml_function_coverage=1 00:06:26.721 --rc genhtml_legend=1 00:06:26.721 --rc geninfo_all_blocks=1 00:06:26.721 --rc geninfo_unexecuted_blocks=1 00:06:26.721 00:06:26.721 ' 00:06:26.721 03:06:30 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:26.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:26.721 --rc genhtml_branch_coverage=1 00:06:26.721 --rc genhtml_function_coverage=1 00:06:26.721 --rc genhtml_legend=1 00:06:26.721 --rc geninfo_all_blocks=1 00:06:26.721 --rc geninfo_unexecuted_blocks=1 00:06:26.721 00:06:26.721 ' 00:06:26.721 03:06:30 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:26.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.721 --rc genhtml_branch_coverage=1 00:06:26.722 --rc genhtml_function_coverage=1 00:06:26.722 --rc genhtml_legend=1 00:06:26.722 --rc geninfo_all_blocks=1 00:06:26.722 --rc geninfo_unexecuted_blocks=1 00:06:26.722 00:06:26.722 ' 00:06:26.722 03:06:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.722 03:06:30 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:26.722 03:06:30 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.722 03:06:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.722 ************************************ 00:06:26.722 START TEST thread_poller_perf 00:06:26.722 ************************************ 00:06:26.722 03:06:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.722 [2024-11-18 03:06:30.171730] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:26.722 [2024-11-18 03:06:30.171836] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71897 ] 00:06:26.982 [2024-11-18 03:06:30.319497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.982 Running 1000 pollers for 1 seconds with 1 microseconds period. 
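The benchmark invocation above maps directly onto the banner it prints: -b is the number of pollers to register, -l the poller period in microseconds, -t the run time in seconds (flag meanings inferred from that banner rather than from the tool's help text):

test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
# -> "Running 1000 pollers for 1 seconds with 1 microseconds period."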
00:06:26.982 [2024-11-18 03:06:30.354230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.916 [2024-11-18T03:06:31.493Z] ====================================== 00:06:27.916 [2024-11-18T03:06:31.493Z] busy:2612018104 (cyc) 00:06:27.916 [2024-11-18T03:06:31.493Z] total_run_count: 306000 00:06:27.916 [2024-11-18T03:06:31.493Z] tsc_hz: 2600000000 (cyc) 00:06:27.916 [2024-11-18T03:06:31.493Z] ====================================== 00:06:27.916 [2024-11-18T03:06:31.493Z] poller_cost: 8536 (cyc), 3283 (nsec) 00:06:27.916 00:06:27.916 real 0m1.274s 00:06:27.916 user 0m1.101s 00:06:27.916 sys 0m0.066s 00:06:27.916 03:06:31 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.916 03:06:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.916 ************************************ 00:06:27.916 END TEST thread_poller_perf 00:06:27.916 ************************************ 00:06:27.916 03:06:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:27.916 03:06:31 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:27.916 03:06:31 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.916 03:06:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.916 ************************************ 00:06:27.916 START TEST thread_poller_perf 00:06:27.916 ************************************ 00:06:27.916 03:06:31 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:28.174 [2024-11-18 03:06:31.500728] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:28.174 [2024-11-18 03:06:31.500839] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71929 ] 00:06:28.174 [2024-11-18 03:06:31.643986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.174 Running 1000 pollers for 1 seconds with 0 microseconds period. 
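The summary block above derives poller_cost by simple division: busy cycles over total runs, then cycles converted to nanoseconds at the reported tsc_hz. Reproducing the first run's numbers in shell arithmetic:

echo $(( 2612018104 / 306000 ))              # 8536 cycles per poller invocation
echo $(( 8536 * 1000000000 / 2600000000 ))   # 3283 ns at the 2.6 GHz TSC above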
00:06:28.174 [2024-11-18 03:06:31.679909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.549 [2024-11-18T03:06:33.126Z] ====================================== 00:06:29.549 [2024-11-18T03:06:33.126Z] busy:2603273732 (cyc) 00:06:29.549 [2024-11-18T03:06:33.126Z] total_run_count: 3971000 00:06:29.549 [2024-11-18T03:06:33.126Z] tsc_hz: 2600000000 (cyc) 00:06:29.549 [2024-11-18T03:06:33.126Z] ====================================== 00:06:29.549 [2024-11-18T03:06:33.126Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:29.549 00:06:29.549 real 0m1.269s 00:06:29.549 user 0m1.100s 00:06:29.549 sys 0m0.063s 00:06:29.549 03:06:32 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.549 ************************************ 00:06:29.549 END TEST thread_poller_perf 00:06:29.549 03:06:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.549 ************************************ 00:06:29.549 03:06:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:29.549 00:06:29.549 real 0m2.804s 00:06:29.549 user 0m2.320s 00:06:29.549 sys 0m0.249s 00:06:29.549 03:06:32 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.549 ************************************ 00:06:29.549 END TEST thread 00:06:29.549 ************************************ 00:06:29.549 03:06:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.549 03:06:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:29.549 03:06:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:29.549 03:06:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.549 03:06:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.549 03:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:29.549 ************************************ 00:06:29.549 START TEST app_cmdline 00:06:29.549 ************************************ 00:06:29.549 03:06:32 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:29.549 * Looking for test storage... 
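Same arithmetic for the second run: roughly 13x more runs land in the same second, and the per-invocation cost drops from 8536 to 655 cycles. That gap is consistent with -l 0 registering the pollers without a period, so they run on every reactor iteration free of the timer bookkeeping a 1 microsecond period implies (an interpretation of the numbers, not something the log states):

echo $(( 2603273732 / 3971000 ))             # 655 cycles per run with -l 0
echo $(( 655 * 1000000000 / 2600000000 ))    # 251 ns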
00:06:29.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:29.549 03:06:32 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:29.549 03:06:32 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:29.549 03:06:32 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:29.549 03:06:32 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.549 03:06:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:29.549 03:06:32 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.549 03:06:32 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:29.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.549 --rc genhtml_branch_coverage=1 00:06:29.549 --rc genhtml_function_coverage=1 00:06:29.549 --rc genhtml_legend=1 00:06:29.549 --rc geninfo_all_blocks=1 00:06:29.549 --rc geninfo_unexecuted_blocks=1 00:06:29.549 00:06:29.549 ' 00:06:29.549 03:06:32 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:29.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.549 --rc genhtml_branch_coverage=1 00:06:29.549 --rc genhtml_function_coverage=1 00:06:29.549 --rc genhtml_legend=1 00:06:29.549 --rc geninfo_all_blocks=1 00:06:29.549 --rc geninfo_unexecuted_blocks=1 00:06:29.549 
00:06:29.549 ' 00:06:29.549 03:06:32 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:29.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.549 --rc genhtml_branch_coverage=1 00:06:29.549 --rc genhtml_function_coverage=1 00:06:29.549 --rc genhtml_legend=1 00:06:29.549 --rc geninfo_all_blocks=1 00:06:29.549 --rc geninfo_unexecuted_blocks=1 00:06:29.549 00:06:29.549 ' 00:06:29.549 03:06:32 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:29.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.549 --rc genhtml_branch_coverage=1 00:06:29.549 --rc genhtml_function_coverage=1 00:06:29.549 --rc genhtml_legend=1 00:06:29.549 --rc geninfo_all_blocks=1 00:06:29.549 --rc geninfo_unexecuted_blocks=1 00:06:29.549 00:06:29.549 ' 00:06:29.549 03:06:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:29.549 03:06:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72012 00:06:29.549 03:06:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72012 00:06:29.549 03:06:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:29.549 03:06:32 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72012 ']' 00:06:29.550 03:06:32 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.550 03:06:32 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.550 03:06:32 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.550 03:06:32 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.550 03:06:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.550 [2024-11-18 03:06:33.061810] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:29.550 [2024-11-18 03:06:33.061921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72012 ] 00:06:29.808 [2024-11-18 03:06:33.207140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.808 [2024-11-18 03:06:33.239667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.375 03:06:33 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.375 03:06:33 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:30.375 03:06:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:30.633 { 00:06:30.633 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:30.633 "fields": { 00:06:30.633 "major": 24, 00:06:30.633 "minor": 9, 00:06:30.633 "patch": 1, 00:06:30.633 "suffix": "-pre", 00:06:30.633 "commit": "b18e1bd62" 00:06:30.633 } 00:06:30.633 } 00:06:30.633 03:06:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:30.633 03:06:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:30.633 03:06:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:30.633 03:06:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:30.633 03:06:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 03:06:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:30.633 03:06:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.633 03:06:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:30.633 03:06:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:30.633 03:06:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:30.633 03:06:34 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.909 request: 00:06:30.909 { 00:06:30.909 "method": "env_dpdk_get_mem_stats", 00:06:30.909 "req_id": 1 00:06:30.909 } 00:06:30.909 Got JSON-RPC error response 00:06:30.909 response: 00:06:30.909 { 00:06:30.909 "code": -32601, 00:06:30.909 "message": "Method not found" 00:06:30.909 } 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.909 03:06:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72012 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72012 ']' 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72012 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72012 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.909 killing process with pid 72012 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72012' 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@969 -- # kill 72012 00:06:30.909 03:06:34 app_cmdline -- common/autotest_common.sh@974 -- # wait 72012 00:06:31.234 00:06:31.234 real 0m1.761s 00:06:31.234 user 0m2.097s 00:06:31.234 sys 0m0.390s 00:06:31.234 ************************************ 00:06:31.234 END TEST app_cmdline 00:06:31.234 ************************************ 00:06:31.234 03:06:34 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.234 03:06:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.234 03:06:34 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.234 03:06:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.234 03:06:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.234 03:06:34 -- common/autotest_common.sh@10 -- # set +x 00:06:31.234 ************************************ 00:06:31.234 START TEST version 00:06:31.234 ************************************ 00:06:31.234 03:06:34 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.234 * Looking for test storage... 
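The --rpcs-allowed list the target was started with acts as a hard whitelist: spdk_get_version and rpc_get_methods answer normally, while anything else, here env_dpdk_get_mem_stats, is rejected with the -32601 "Method not found" response just shown, which is exactly what the NOT wrapper asserts. Condensed to direct rpc.py calls:

scripts/rpc.py spdk_get_version                       # allowed, returns the version object
scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # exactly the two whitelisted methods
scripts/rpc.py env_dpdk_get_mem_stats                 # rejected: -32601 Method not found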
00:06:31.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:31.234 03:06:34 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:31.234 03:06:34 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:31.234 03:06:34 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:31.234 03:06:34 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:31.234 03:06:34 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.234 03:06:34 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.234 03:06:34 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.234 03:06:34 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.234 03:06:34 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.234 03:06:34 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.234 03:06:34 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.234 03:06:34 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.234 03:06:34 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.234 03:06:34 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.234 03:06:34 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.234 03:06:34 version -- scripts/common.sh@344 -- # case "$op" in 00:06:31.234 03:06:34 version -- scripts/common.sh@345 -- # : 1 00:06:31.234 03:06:34 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.234 03:06:34 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.234 03:06:34 version -- scripts/common.sh@365 -- # decimal 1 00:06:31.234 03:06:34 version -- scripts/common.sh@353 -- # local d=1 00:06:31.234 03:06:34 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.234 03:06:34 version -- scripts/common.sh@355 -- # echo 1 00:06:31.234 03:06:34 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.234 03:06:34 version -- scripts/common.sh@366 -- # decimal 2 00:06:31.234 03:06:34 version -- scripts/common.sh@353 -- # local d=2 00:06:31.234 03:06:34 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.493 03:06:34 version -- scripts/common.sh@355 -- # echo 2 00:06:31.493 03:06:34 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.493 03:06:34 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.493 03:06:34 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.493 03:06:34 version -- scripts/common.sh@368 -- # return 0 00:06:31.493 03:06:34 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.493 03:06:34 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:31.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.493 --rc genhtml_branch_coverage=1 00:06:31.493 --rc genhtml_function_coverage=1 00:06:31.493 --rc genhtml_legend=1 00:06:31.493 --rc geninfo_all_blocks=1 00:06:31.493 --rc geninfo_unexecuted_blocks=1 00:06:31.493 00:06:31.493 ' 00:06:31.493 03:06:34 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:31.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.493 --rc genhtml_branch_coverage=1 00:06:31.493 --rc genhtml_function_coverage=1 00:06:31.493 --rc genhtml_legend=1 00:06:31.493 --rc geninfo_all_blocks=1 00:06:31.493 --rc geninfo_unexecuted_blocks=1 00:06:31.493 00:06:31.493 ' 00:06:31.493 03:06:34 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:31.493 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:31.493 --rc genhtml_branch_coverage=1 00:06:31.493 --rc genhtml_function_coverage=1 00:06:31.493 --rc genhtml_legend=1 00:06:31.493 --rc geninfo_all_blocks=1 00:06:31.493 --rc geninfo_unexecuted_blocks=1 00:06:31.493 00:06:31.493 ' 00:06:31.493 03:06:34 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:31.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.493 --rc genhtml_branch_coverage=1 00:06:31.493 --rc genhtml_function_coverage=1 00:06:31.493 --rc genhtml_legend=1 00:06:31.493 --rc geninfo_all_blocks=1 00:06:31.493 --rc geninfo_unexecuted_blocks=1 00:06:31.493 00:06:31.493 ' 00:06:31.493 03:06:34 version -- app/version.sh@17 -- # get_header_version major 00:06:31.493 03:06:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.493 03:06:34 version -- app/version.sh@14 -- # cut -f2 00:06:31.493 03:06:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.493 03:06:34 version -- app/version.sh@17 -- # major=24 00:06:31.493 03:06:34 version -- app/version.sh@18 -- # get_header_version minor 00:06:31.493 03:06:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.493 03:06:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.493 03:06:34 version -- app/version.sh@14 -- # cut -f2 00:06:31.493 03:06:34 version -- app/version.sh@18 -- # minor=9 00:06:31.493 03:06:34 version -- app/version.sh@19 -- # get_header_version patch 00:06:31.493 03:06:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.493 03:06:34 version -- app/version.sh@14 -- # cut -f2 00:06:31.493 03:06:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.493 03:06:34 version -- app/version.sh@19 -- # patch=1 00:06:31.493 03:06:34 version -- app/version.sh@20 -- # get_header_version suffix 00:06:31.493 03:06:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.493 03:06:34 version -- app/version.sh@14 -- # cut -f2 00:06:31.493 03:06:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.493 03:06:34 version -- app/version.sh@20 -- # suffix=-pre 00:06:31.493 03:06:34 version -- app/version.sh@22 -- # version=24.9 00:06:31.493 03:06:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:31.493 03:06:34 version -- app/version.sh@25 -- # version=24.9.1 00:06:31.493 03:06:34 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:31.493 03:06:34 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.493 03:06:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:31.493 03:06:34 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:31.493 03:06:34 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:31.493 00:06:31.493 real 0m0.199s 00:06:31.493 user 0m0.121s 00:06:31.493 sys 0m0.101s 00:06:31.493 03:06:34 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.493 ************************************ 00:06:31.493 END TEST version 00:06:31.493 ************************************ 00:06:31.493 03:06:34 
version -- common/autotest_common.sh@10 -- # set +x 00:06:31.493 03:06:34 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:31.493 03:06:34 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:31.493 03:06:34 -- spdk/autotest.sh@194 -- # uname -s 00:06:31.493 03:06:34 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:31.493 03:06:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:31.493 03:06:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:31.493 03:06:34 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:31.493 03:06:34 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:31.493 03:06:34 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:31.493 03:06:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.493 03:06:34 -- common/autotest_common.sh@10 -- # set +x 00:06:31.493 ************************************ 00:06:31.493 START TEST blockdev_nvme 00:06:31.493 ************************************ 00:06:31.493 03:06:34 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:31.493 * Looking for test storage... 00:06:31.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:31.493 03:06:34 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:31.493 03:06:34 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:31.493 03:06:34 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:31.493 03:06:35 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.494 03:06:35 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:31.752 03:06:35 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:31.752 03:06:35 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.752 03:06:35 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:31.752 03:06:35 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.752 03:06:35 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:31.752 03:06:35 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:31.752 03:06:35 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.752 03:06:35 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:31.752 03:06:35 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.752 03:06:35 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.752 03:06:35 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.752 03:06:35 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:31.752 03:06:35 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.752 03:06:35 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:31.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.752 --rc genhtml_branch_coverage=1 00:06:31.752 --rc genhtml_function_coverage=1 00:06:31.752 --rc genhtml_legend=1 00:06:31.752 --rc geninfo_all_blocks=1 00:06:31.752 --rc geninfo_unexecuted_blocks=1 00:06:31.752 00:06:31.752 ' 00:06:31.752 03:06:35 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:31.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.752 --rc genhtml_branch_coverage=1 00:06:31.752 --rc genhtml_function_coverage=1 00:06:31.752 --rc genhtml_legend=1 00:06:31.752 --rc geninfo_all_blocks=1 00:06:31.752 --rc geninfo_unexecuted_blocks=1 00:06:31.752 00:06:31.752 ' 00:06:31.752 03:06:35 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:31.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.752 --rc genhtml_branch_coverage=1 00:06:31.752 --rc genhtml_function_coverage=1 00:06:31.752 --rc genhtml_legend=1 00:06:31.752 --rc geninfo_all_blocks=1 00:06:31.752 --rc geninfo_unexecuted_blocks=1 00:06:31.752 00:06:31.752 ' 00:06:31.752 03:06:35 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:31.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.752 --rc genhtml_branch_coverage=1 00:06:31.752 --rc genhtml_function_coverage=1 00:06:31.752 --rc genhtml_legend=1 00:06:31.752 --rc geninfo_all_blocks=1 00:06:31.752 --rc geninfo_unexecuted_blocks=1 00:06:31.752 00:06:31.752 ' 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:31.752 03:06:35 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72174 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 72174 00:06:31.752 03:06:35 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 72174 ']' 00:06:31.752 03:06:35 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.752 03:06:35 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.752 03:06:35 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.752 03:06:35 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.752 03:06:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.752 03:06:35 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:31.752 [2024-11-18 03:06:35.158931] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
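The xtrace a few entries back is scripts/common.sh deciding whether the installed lcov (1.15 on this box) predates version 2, plus app/version.sh pulling SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h. Condensed into a standalone sketch below; the helper names ver_lt and header_version are illustrative only, the components are assumed numeric, and the real cmp_versions also copes with the suffix parts split off by the '.-:' IFS.

# Split each version string on ".", "-" or ":" and compare piecewise.
ver_lt() {
    local IFS='.-:' a b i
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for (( i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly older
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly newer
    done
    return 1                                        # equal
}

# Pull one SPDK_VERSION_* field out of version.h, as app/version.sh does
# (the defines are tab-delimited, hence the bare "cut -f2").
header_version() {
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h |
        cut -f2 | tr -d '"'
}

ver_lt 1.15 2 && echo "lcov is older than 2"        # true for the 1.15 here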
00:06:31.752 [2024-11-18 03:06:35.159049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72174 ] 00:06:31.752 [2024-11-18 03:06:35.311476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.011 [2024-11-18 03:06:35.348218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.578 03:06:36 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.578 03:06:36 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:32.578 03:06:36 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:32.578 03:06:36 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:32.578 03:06:36 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:32.578 03:06:36 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:32.578 03:06:36 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:32.578 03:06:36 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:32.578 03:06:36 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.578 03:06:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.838 03:06:36 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.838 03:06:36 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:32.838 03:06:36 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.838 03:06:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.838 03:06:36 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.838 03:06:36 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:32.838 03:06:36 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:32.838 03:06:36 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.838 03:06:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.838 03:06:36 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.838 03:06:36 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:32.838 03:06:36 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.838 03:06:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.838 03:06:36 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.838 03:06:36 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:32.838 03:06:36 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.838 03:06:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.100 03:06:36 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.100 03:06:36 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:33.100 03:06:36 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:33.100 03:06:36 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:33.100 03:06:36 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.100 03:06:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.100 03:06:36 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.100 03:06:36 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:33.100 03:06:36 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:33.100 03:06:36 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3780d4c9-5f0f-486e-8a6b-0c435ee03db8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3780d4c9-5f0f-486e-8a6b-0c435ee03db8",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2da3c3e4-ce90-4343-b4d7-20ecf0707de1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2da3c3e4-ce90-4343-b4d7-20ecf0707de1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8d4fa3f6-7311-47d5-9d46-0a5cd506655e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8d4fa3f6-7311-47d5-9d46-0a5cd506655e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ef70844d-59b6-43a9-8639-ea043149e9a2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ef70844d-59b6-43a9-8639-ea043149e9a2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fddb0bcf-efca-4765-a72b-70943fc49a39"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "fddb0bcf-efca-4765-a72b-70943fc49a39",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b3292a3d-e6c5-4859-9268-5090f7969455"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b3292a3d-e6c5-4859-9268-5090f7969455",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:33.100 03:06:36 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:33.100 03:06:36 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:33.100 03:06:36 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:33.100 03:06:36 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 72174 00:06:33.100 03:06:36 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 72174 ']' 00:06:33.100 03:06:36 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 72174 00:06:33.100 03:06:36 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:33.100 03:06:36 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.100 03:06:36 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72174 00:06:33.100 03:06:36 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.100 03:06:36 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.100 killing process with pid 72174 00:06:33.100 03:06:36 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72174' 00:06:33.100 03:06:36 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 72174 00:06:33.100 03:06:36 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 72174 00:06:33.359 03:06:36 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:33.359 03:06:36 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:33.359 03:06:36 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:33.359 03:06:36 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.359 03:06:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.359 ************************************ 00:06:33.359 START TEST bdev_hello_world 00:06:33.359 ************************************ 00:06:33.359 03:06:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:33.359 [2024-11-18 03:06:36.880573] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:33.359 [2024-11-18 03:06:36.880691] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72241 ] 00:06:33.619 [2024-11-18 03:06:37.030293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.619 [2024-11-18 03:06:37.072613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.186 [2024-11-18 03:06:37.463739] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:34.186 [2024-11-18 03:06:37.463799] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:34.186 [2024-11-18 03:06:37.463826] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:34.186 [2024-11-18 03:06:37.466201] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:34.186 [2024-11-18 03:06:37.467122] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:34.186 [2024-11-18 03:06:37.467181] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:34.186 [2024-11-18 03:06:37.467750] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
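The JSON blob handed to load_subsystem_config at the start of this test is what scripts/gen_nvme.sh emitted for this VM: four emulated QEMU controllers at 0000:00:10.0 through 0000:00:13.0. The same attachments could be made one at a time against an already-running target over RPC, roughly as below (socket path left at its default; the Nvme0..Nvme3 names are chosen to match this run).

# Attach each PCIe controller individually instead of via a bulk JSON load.
i=0
for addr in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    scripts/rpc.py bdev_nvme_attach_controller -b "Nvme$i" -t PCIe -a "$addr"
    i=$(( i + 1 ))
done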
00:06:34.186 00:06:34.186 [2024-11-18 03:06:37.467799] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:34.186 00:06:34.186 real 0m0.842s 00:06:34.186 user 0m0.546s 00:06:34.186 sys 0m0.188s 00:06:34.186 03:06:37 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.186 ************************************ 00:06:34.186 END TEST bdev_hello_world 00:06:34.186 ************************************ 00:06:34.186 03:06:37 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:34.186 03:06:37 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:34.186 03:06:37 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:34.186 03:06:37 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.186 03:06:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:34.186 ************************************ 00:06:34.186 START TEST bdev_bounds 00:06:34.186 ************************************ 00:06:34.186 03:06:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:34.186 03:06:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72272 00:06:34.186 03:06:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.186 Process bdevio pid: 72272 00:06:34.186 03:06:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72272' 00:06:34.186 03:06:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72272 00:06:34.186 03:06:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 72272 ']' 00:06:34.186 03:06:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.186 03:06:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.186 03:06:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.186 03:06:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.186 03:06:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:34.186 03:06:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:34.444 [2024-11-18 03:06:37.796940] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
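As with spdk_tgt earlier, bdevio is launched in the background here and the suite blocks in waitforlisten until the RPC socket answers before driving tests.py. A minimal stand-in for that wait, assuming the default socket path (the suite's own helper additionally checks that the pid is still alive), could be a simple poll loop:

# Poll the RPC socket until the freshly launched target responds.
rpc_addr=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && break
    sleep 0.1
done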
00:06:34.444 [2024-11-18 03:06:37.797078] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72272 ] 00:06:34.444 [2024-11-18 03:06:37.951485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.444 [2024-11-18 03:06:38.002386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.444 [2024-11-18 03:06:38.002594] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.444 [2024-11-18 03:06:38.002661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.386 03:06:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.386 03:06:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:35.386 03:06:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:35.386 I/O targets: 00:06:35.386 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:35.386 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:35.386 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:35.386 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:35.386 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:35.386 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:35.386 00:06:35.386 00:06:35.386 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.386 http://cunit.sourceforge.net/ 00:06:35.386 00:06:35.386 00:06:35.386 Suite: bdevio tests on: Nvme3n1 00:06:35.386 Test: blockdev write read block ...passed 00:06:35.386 Test: blockdev write zeroes read block ...passed 00:06:35.386 Test: blockdev write zeroes read no split ...passed 00:06:35.386 Test: blockdev write zeroes read split ...passed 00:06:35.386 Test: blockdev write zeroes read split partial ...passed 00:06:35.386 Test: blockdev reset ...[2024-11-18 03:06:38.784422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:35.386 passed 00:06:35.386 Test: blockdev write read 8 blocks ...[2024-11-18 03:06:38.788262] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:35.386 passed 00:06:35.386 Test: blockdev write read size > 128k ...passed 00:06:35.386 Test: blockdev write read invalid size ...passed 00:06:35.386 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.386 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.386 Test: blockdev write read max offset ...passed 00:06:35.386 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.386 Test: blockdev writev readv 8 blocks ...passed 00:06:35.386 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.386 Test: blockdev writev readv block ...passed 00:06:35.386 Test: blockdev writev readv size > 128k ...passed 00:06:35.386 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.386 Test: blockdev comparev and writev ...[2024-11-18 03:06:38.804416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ab606000 len:0x1000 00:06:35.386 [2024-11-18 03:06:38.804472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.386 passed 00:06:35.386 Test: blockdev nvme passthru rw ...passed 00:06:35.387 Test: blockdev nvme passthru vendor specific ...passed 00:06:35.387 Test: blockdev nvme admin passthru ...[2024-11-18 03:06:38.807746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.387 [2024-11-18 03:06:38.807808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.387 passed 00:06:35.387 Test: blockdev copy ...passed 00:06:35.387 Suite: bdevio tests on: Nvme2n3 00:06:35.387 Test: blockdev write read block ...passed 00:06:35.387 Test: blockdev write zeroes read block ...passed 00:06:35.387 Test: blockdev write zeroes read no split ...passed 00:06:35.387 Test: blockdev write zeroes read split ...passed 00:06:35.387 Test: blockdev write zeroes read split partial ...passed 00:06:35.387 Test: blockdev reset ...[2024-11-18 03:06:38.837526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:35.387 passed 00:06:35.387 Test: blockdev write read 8 blocks ...[2024-11-18 03:06:38.841486] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:35.387 passed 00:06:35.387 Test: blockdev write read size > 128k ...passed 00:06:35.387 Test: blockdev write read invalid size ...passed 00:06:35.387 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.387 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.387 Test: blockdev write read max offset ...passed 00:06:35.387 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.387 Test: blockdev writev readv 8 blocks ...passed 00:06:35.387 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.387 Test: blockdev writev readv block ...passed 00:06:35.387 Test: blockdev writev readv size > 128k ...passed 00:06:35.387 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.387 Test: blockdev comparev and writev ...[2024-11-18 03:06:38.860713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2df605000 len:0x1000 00:06:35.387 [2024-11-18 03:06:38.860778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.387 passed 00:06:35.387 Test: blockdev nvme passthru rw ...passed 00:06:35.387 Test: blockdev nvme passthru vendor specific ...passed 00:06:35.387 Test: blockdev nvme admin passthru ...[2024-11-18 03:06:38.862894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.387 [2024-11-18 03:06:38.862934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.387 passed 00:06:35.387 Test: blockdev copy ...passed 00:06:35.387 Suite: bdevio tests on: Nvme2n2 00:06:35.387 Test: blockdev write read block ...passed 00:06:35.387 Test: blockdev write zeroes read block ...passed 00:06:35.387 Test: blockdev write zeroes read no split ...passed 00:06:35.387 Test: blockdev write zeroes read split ...passed 00:06:35.387 Test: blockdev write zeroes read split partial ...passed 00:06:35.387 Test: blockdev reset ...[2024-11-18 03:06:38.890707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:35.387 passed 00:06:35.387 Test: blockdev write read 8 blocks ...[2024-11-18 03:06:38.894897] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:35.387 passed 00:06:35.387 Test: blockdev write read size > 128k ...passed 00:06:35.387 Test: blockdev write read invalid size ...passed 00:06:35.387 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.387 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.387 Test: blockdev write read max offset ...passed 00:06:35.387 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.387 Test: blockdev writev readv 8 blocks ...passed 00:06:35.387 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.387 Test: blockdev writev readv block ...passed 00:06:35.387 Test: blockdev writev readv size > 128k ...passed 00:06:35.387 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.387 Test: blockdev comparev and writev ...[2024-11-18 03:06:38.911827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dfa36000 len:0x1000 00:06:35.387 [2024-11-18 03:06:38.911891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.387 passed 00:06:35.387 Test: blockdev nvme passthru rw ...passed 00:06:35.387 Test: blockdev nvme passthru vendor specific ...[2024-11-18 03:06:38.914401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.387 [2024-11-18 03:06:38.914440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.387 passed 00:06:35.387 Test: blockdev nvme admin passthru ...passed 00:06:35.387 Test: blockdev copy ...passed 00:06:35.387 Suite: bdevio tests on: Nvme2n1 00:06:35.387 Test: blockdev write read block ...passed 00:06:35.387 Test: blockdev write zeroes read block ...passed 00:06:35.387 Test: blockdev write zeroes read no split ...passed 00:06:35.387 Test: blockdev write zeroes read split ...passed 00:06:35.387 Test: blockdev write zeroes read split partial ...passed 00:06:35.387 Test: blockdev reset ...[2024-11-18 03:06:38.943144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:35.387 passed 00:06:35.387 Test: blockdev write read 8 blocks ...[2024-11-18 03:06:38.946742] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:35.387 passed 00:06:35.387 Test: blockdev write read size > 128k ...passed 00:06:35.387 Test: blockdev write read invalid size ...passed 00:06:35.387 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.387 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.387 Test: blockdev write read max offset ...passed 00:06:35.387 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.387 Test: blockdev writev readv 8 blocks ...passed 00:06:35.387 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.387 Test: blockdev writev readv block ...passed 00:06:35.387 Test: blockdev writev readv size > 128k ...passed 00:06:35.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.646 Test: blockdev comparev and writev ...[2024-11-18 03:06:38.962545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dfa30000 len:0x1000 00:06:35.646 [2024-11-18 03:06:38.962609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.646 passed 00:06:35.646 Test: blockdev nvme passthru rw ...passed 00:06:35.646 Test: blockdev nvme passthru vendor specific ...[2024-11-18 03:06:38.965058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.646 passed 00:06:35.646 Test: blockdev nvme admin passthru ...[2024-11-18 03:06:38.965100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.646 passed 00:06:35.646 Test: blockdev copy ...passed 00:06:35.646 Suite: bdevio tests on: Nvme1n1 00:06:35.646 Test: blockdev write read block ...passed 00:06:35.646 Test: blockdev write zeroes read block ...passed 00:06:35.646 Test: blockdev write zeroes read no split ...passed 00:06:35.646 Test: blockdev write zeroes read split ...passed 00:06:35.646 Test: blockdev write zeroes read split partial ...passed 00:06:35.646 Test: blockdev reset ...[2024-11-18 03:06:38.996789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:35.646 passed 00:06:35.646 Test: blockdev write read 8 blocks ...[2024-11-18 03:06:38.999725] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:35.646 passed 00:06:35.646 Test: blockdev write read size > 128k ...passed 00:06:35.646 Test: blockdev write read invalid size ...passed 00:06:35.646 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.646 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.646 Test: blockdev write read max offset ...passed 00:06:35.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.646 Test: blockdev writev readv 8 blocks ...passed 00:06:35.646 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.646 Test: blockdev writev readv block ...passed 00:06:35.646 Test: blockdev writev readv size > 128k ...passed 00:06:35.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.646 Test: blockdev comparev and writev ...[2024-11-18 03:06:39.015774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dfa2c000 len:0x1000 00:06:35.646 [2024-11-18 03:06:39.015833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.646 passed 00:06:35.646 Test: blockdev nvme passthru rw ...passed 00:06:35.646 Test: blockdev nvme passthru vendor specific ...passed 00:06:35.646 Test: blockdev nvme admin passthru ...[2024-11-18 03:06:39.018420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.646 [2024-11-18 03:06:39.018460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.646 passed 00:06:35.646 Test: blockdev copy ...passed 00:06:35.646 Suite: bdevio tests on: Nvme0n1 00:06:35.646 Test: blockdev write read block ...passed 00:06:35.646 Test: blockdev write zeroes read block ...passed 00:06:35.646 Test: blockdev write zeroes read no split ...passed 00:06:35.646 Test: blockdev write zeroes read split ...passed 00:06:35.647 Test: blockdev write zeroes read split partial ...passed 00:06:35.647 Test: blockdev reset ...[2024-11-18 03:06:39.050710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:35.647 [2024-11-18 03:06:39.053024] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:35.647 passed 00:06:35.647 Test: blockdev write read 8 blocks ...passed 00:06:35.647 Test: blockdev write read size > 128k ...passed 00:06:35.647 Test: blockdev write read invalid size ...passed 00:06:35.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.647 Test: blockdev write read max offset ...passed 00:06:35.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.647 Test: blockdev writev readv 8 blocks ...passed 00:06:35.647 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.647 Test: blockdev writev readv block ...passed 00:06:35.647 Test: blockdev writev readv size > 128k ...passed 00:06:35.647 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.647 Test: blockdev comparev and writev ...passed 00:06:35.647 Test: blockdev nvme passthru rw ...[2024-11-18 03:06:39.067660] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:35.647 separate metadata which is not supported yet. 
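The skip notice above is expected rather than a failure: of the six targets, only Nvme0n1 was created with separate (non-interleaved) metadata, and bdevio's comparev_and_writev path does not support that layout yet. The bdev dump earlier in this log shows it directly ("md_size": 64 with "md_interleave": false on Nvme0n1 only); against a live target the same fields can be checked with a one-liner, assuming jq is available:

# Confirm why comparev_and_writev was skipped: 64B separate metadata.
scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'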
00:06:35.647 passed 00:06:35.647 Test: blockdev nvme passthru vendor specific ...[2024-11-18 03:06:39.069246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:35.647 [2024-11-18 03:06:39.069298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:35.647 passed 00:06:35.647 Test: blockdev nvme admin passthru ...passed 00:06:35.647 Test: blockdev copy ...passed 00:06:35.647 00:06:35.647 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.647 suites 6 6 n/a 0 0 00:06:35.647 tests 138 138 138 0 0 00:06:35.647 asserts 893 893 893 0 n/a 00:06:35.647 00:06:35.647 Elapsed time = 0.704 seconds 00:06:35.647 0 00:06:35.647 03:06:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72272 00:06:35.647 03:06:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 72272 ']' 00:06:35.647 03:06:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 72272 00:06:35.647 03:06:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:35.647 03:06:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.647 03:06:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72272 00:06:35.647 03:06:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.647 03:06:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.647 killing process with pid 72272 00:06:35.647 03:06:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72272' 00:06:35.647 03:06:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 72272 00:06:35.647 03:06:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 72272 00:06:35.907 03:06:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:35.907 00:06:35.907 real 0m1.583s 00:06:35.907 user 0m3.846s 00:06:35.907 sys 0m0.349s 00:06:35.907 03:06:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.907 03:06:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:35.907 ************************************ 00:06:35.907 END TEST bdev_bounds 00:06:35.907 ************************************ 00:06:35.907 03:06:39 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:35.907 03:06:39 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:35.907 03:06:39 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.907 03:06:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:35.907 ************************************ 00:06:35.907 START TEST bdev_nbd 00:06:35.907 ************************************ 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72326 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72326 /var/tmp/spdk-nbd.sock 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 72326 ']' 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:35.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.907 03:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:35.907 [2024-11-18 03:06:39.449068] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
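bdev_svc is started here purely so the NBD test has an RPC target holding the six bdevs; the per-device cycle exercised below then exports each bdev as a kernel block device and reads one block back through it. Stripped of the suite's retry bookkeeping, each iteration amounts to the following sketch (socket, bdev, and nbd0 as in this run; the grep-on-/proc/partitions wait mirrors the waitfornbd trace below):

# Export a bdev over NBD, wait for the kernel node, read one 4K block back.
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct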
00:06:35.907 [2024-11-18 03:06:39.449213] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.168 [2024-11-18 03:06:39.603365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.168 [2024-11-18 03:06:39.652562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.738 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.738 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:36.738 03:06:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:36.738 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.738 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.738 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:36.738 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:36.738 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.738 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.738 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:36.738 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:36.738 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:36.738 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.996 1+0 records in 
00:06:36.996 1+0 records out 00:06:36.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121467 s, 3.4 MB/s 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:36.996 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.255 1+0 records in 00:06:37.255 1+0 records out 00:06:37.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455163 s, 9.0 MB/s 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:37.255 03:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.514 1+0 records in 00:06:37.514 1+0 records out 00:06:37.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450912 s, 9.1 MB/s 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:37.514 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.776 1+0 records in 00:06:37.776 1+0 records out 00:06:37.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342649 s, 12.0 MB/s 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.776 03:06:41 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:37.776 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.037 1+0 records in 00:06:38.037 1+0 records out 00:06:38.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000751679 s, 5.4 MB/s 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:38.037 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.298 1+0 records in 00:06:38.298 1+0 records out 00:06:38.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113649 s, 3.6 MB/s 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:38.298 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.558 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:38.558 { 00:06:38.558 "nbd_device": "/dev/nbd0", 00:06:38.558 "bdev_name": "Nvme0n1" 00:06:38.558 }, 00:06:38.558 { 00:06:38.558 "nbd_device": "/dev/nbd1", 00:06:38.558 "bdev_name": "Nvme1n1" 00:06:38.558 }, 00:06:38.558 { 00:06:38.558 "nbd_device": "/dev/nbd2", 00:06:38.558 "bdev_name": "Nvme2n1" 00:06:38.558 }, 00:06:38.558 { 00:06:38.558 "nbd_device": "/dev/nbd3", 00:06:38.558 "bdev_name": "Nvme2n2" 00:06:38.558 }, 00:06:38.558 { 00:06:38.558 "nbd_device": "/dev/nbd4", 00:06:38.558 "bdev_name": "Nvme2n3" 00:06:38.558 }, 00:06:38.558 { 00:06:38.558 "nbd_device": "/dev/nbd5", 00:06:38.558 "bdev_name": "Nvme3n1" 00:06:38.558 } 00:06:38.558 ]' 00:06:38.558 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:38.558 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:38.558 { 00:06:38.558 "nbd_device": "/dev/nbd0", 00:06:38.558 "bdev_name": "Nvme0n1" 00:06:38.558 }, 00:06:38.558 { 00:06:38.558 "nbd_device": "/dev/nbd1", 00:06:38.558 "bdev_name": "Nvme1n1" 00:06:38.558 }, 00:06:38.558 { 00:06:38.558 "nbd_device": "/dev/nbd2", 00:06:38.558 "bdev_name": "Nvme2n1" 00:06:38.558 }, 00:06:38.558 { 00:06:38.558 "nbd_device": "/dev/nbd3", 00:06:38.558 "bdev_name": "Nvme2n2" 00:06:38.558 }, 00:06:38.558 { 00:06:38.558 "nbd_device": "/dev/nbd4", 00:06:38.558 "bdev_name": "Nvme2n3" 00:06:38.558 }, 00:06:38.558 { 00:06:38.558 "nbd_device": "/dev/nbd5", 00:06:38.558 "bdev_name": "Nvme3n1" 00:06:38.558 } 00:06:38.558 ]' 00:06:38.558 03:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:38.558 03:06:42 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:38.558 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.558 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:38.558 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.558 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:38.558 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.558 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.819 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.819 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.819 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.819 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.819 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.819 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.819 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.819 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.819 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.819 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.079 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.079 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.079 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.079 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.079 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.079 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.079 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.079 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.079 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.079 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.338 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.596 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:39.596 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.596 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.596 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.596 03:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:39.596 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:39.596 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:39.596 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:39.596 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.596 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.596 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:39.596 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.596 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.596 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.596 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:39.854 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:39.854 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:39.854 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:39.854 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.854 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.854 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:39.854 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.854 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.854 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.854 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.854 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.112 03:06:43 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.112 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:40.371 /dev/nbd0 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:40.371 
03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.371 1+0 records in 00:06:40.371 1+0 records out 00:06:40.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120311 s, 3.4 MB/s 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.371 03:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:40.628 /dev/nbd1 00:06:40.628 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.628 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.628 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:40.628 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.629 1+0 records in 00:06:40.629 1+0 records out 00:06:40.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000865307 s, 4.7 MB/s 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 
-- # return 0 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.629 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:40.886 /dev/nbd10 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.886 1+0 records in 00:06:40.886 1+0 records out 00:06:40.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010418 s, 3.9 MB/s 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:40.886 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:40.887 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.887 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.887 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:41.145 /dev/nbd11 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.145 1+0 records in 00:06:41.145 1+0 records out 00:06:41.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689142 s, 5.9 MB/s 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:41.145 /dev/nbd12 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:41.145 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.403 1+0 records in 00:06:41.403 1+0 records out 00:06:41.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012789 s, 3.2 MB/s 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:41.403 /dev/nbd13 00:06:41.403 03:06:44 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.403 1+0 records in 00:06:41.403 1+0 records out 00:06:41.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010552 s, 3.9 MB/s 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.403 03:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.662 { 00:06:41.662 "nbd_device": "/dev/nbd0", 00:06:41.662 "bdev_name": "Nvme0n1" 00:06:41.662 }, 00:06:41.662 { 00:06:41.662 "nbd_device": "/dev/nbd1", 00:06:41.662 "bdev_name": "Nvme1n1" 00:06:41.662 }, 00:06:41.662 { 00:06:41.662 "nbd_device": "/dev/nbd10", 00:06:41.662 "bdev_name": "Nvme2n1" 00:06:41.662 }, 00:06:41.662 { 00:06:41.662 "nbd_device": "/dev/nbd11", 00:06:41.662 "bdev_name": "Nvme2n2" 00:06:41.662 }, 00:06:41.662 { 00:06:41.662 "nbd_device": "/dev/nbd12", 00:06:41.662 "bdev_name": "Nvme2n3" 00:06:41.662 }, 00:06:41.662 { 00:06:41.662 "nbd_device": "/dev/nbd13", 00:06:41.662 "bdev_name": "Nvme3n1" 00:06:41.662 } 00:06:41.662 ]' 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.662 { 00:06:41.662 "nbd_device": "/dev/nbd0", 00:06:41.662 "bdev_name": "Nvme0n1" 00:06:41.662 }, 00:06:41.662 { 00:06:41.662 "nbd_device": "/dev/nbd1", 00:06:41.662 "bdev_name": "Nvme1n1" 00:06:41.662 }, 00:06:41.662 { 00:06:41.662 "nbd_device": "/dev/nbd10", 00:06:41.662 "bdev_name": "Nvme2n1" 00:06:41.662 }, 00:06:41.662 { 
00:06:41.662 "nbd_device": "/dev/nbd11", 00:06:41.662 "bdev_name": "Nvme2n2" 00:06:41.662 }, 00:06:41.662 { 00:06:41.662 "nbd_device": "/dev/nbd12", 00:06:41.662 "bdev_name": "Nvme2n3" 00:06:41.662 }, 00:06:41.662 { 00:06:41.662 "nbd_device": "/dev/nbd13", 00:06:41.662 "bdev_name": "Nvme3n1" 00:06:41.662 } 00:06:41.662 ]' 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.662 /dev/nbd1 00:06:41.662 /dev/nbd10 00:06:41.662 /dev/nbd11 00:06:41.662 /dev/nbd12 00:06:41.662 /dev/nbd13' 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.662 /dev/nbd1 00:06:41.662 /dev/nbd10 00:06:41.662 /dev/nbd11 00:06:41.662 /dev/nbd12 00:06:41.662 /dev/nbd13' 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:41.662 256+0 records in 00:06:41.662 256+0 records out 00:06:41.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00943548 s, 111 MB/s 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.662 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.920 256+0 records in 00:06:41.920 256+0 records out 00:06:41.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133282 s, 7.9 MB/s 00:06:41.920 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.920 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.178 256+0 records in 00:06:42.178 256+0 records out 00:06:42.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172672 s, 6.1 MB/s 00:06:42.178 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.178 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:42.178 256+0 records in 00:06:42.178 256+0 records out 00:06:42.178 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.167062 s, 6.3 MB/s 00:06:42.178 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.178 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:42.438 256+0 records in 00:06:42.439 256+0 records out 00:06:42.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140575 s, 7.5 MB/s 00:06:42.439 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.439 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:42.439 256+0 records in 00:06:42.439 256+0 records out 00:06:42.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0858664 s, 12.2 MB/s 00:06:42.439 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.439 03:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:42.709 256+0 records in 00:06:42.710 256+0 records out 00:06:42.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0849587 s, 12.3 MB/s 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- 
# cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.710 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.971 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:43.229 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:43.229 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:43.229 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:43.229 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.229 03:06:46 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.229 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:43.229 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.229 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.229 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.229 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:43.487 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:43.487 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:43.487 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:43.487 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.487 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.487 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:43.487 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.487 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.487 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.487 03:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.745 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:44.003 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:44.260 malloc_lvol_verify 00:06:44.260 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:44.517 55a3850e-2b13-447a-9392-0a17f5c083b4 00:06:44.517 03:06:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:44.776 3ca0e1c0-83dd-4bc0-b766-9568ca33d503 00:06:44.776 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:44.776 /dev/nbd0 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:45.034 mke2fs 1.47.0 (5-Feb-2023) 00:06:45.034 Discarding device blocks: 0/4096 done 00:06:45.034 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:45.034 00:06:45.034 Allocating group tables: 0/1 done 00:06:45.034 Writing inode tables: 0/1 done 00:06:45.034 Creating journal (1024 blocks): done 00:06:45.034 Writing superblocks and filesystem accounting information: 0/1 done 00:06:45.034 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:45.034 03:06:48 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72326 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 72326 ']' 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 72326 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.034 03:06:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72326 00:06:45.292 03:06:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.292 03:06:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.292 03:06:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72326' 00:06:45.292 killing process with pid 72326 00:06:45.292 03:06:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 72326 00:06:45.292 03:06:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 72326 00:06:45.292 03:06:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:45.292 00:06:45.292 real 0m9.406s 00:06:45.292 user 0m13.531s 00:06:45.292 sys 0m3.101s 00:06:45.292 03:06:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.292 03:06:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:45.292 ************************************ 00:06:45.292 END TEST bdev_nbd 00:06:45.292 ************************************ 00:06:45.292 03:06:48 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:45.292 03:06:48 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:45.292 skipping fio tests on NVMe due to multi-ns failures. 00:06:45.292 03:06:48 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
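The bdev_nbd test above reduces to a short RPC-driven cycle: export a bdev as a kernel /dev/nbdX with nbd_start_disk, poll /proc/partitions until the device shows up, prove it is readable with a single direct-I/O dd, run a write/read-back comparison, and tear the device down again. A minimal sketch of that cycle, reconstructed from the xtrace lines above (the RPC socket, RPC names, and dd/cmp arguments are taken from the trace; the scratch paths are shortened to /tmp, and the pacing inside the waitfornbd retry loop is an assumption, since the helper's sleep never appears in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # Export the bdev; the RPC prints the kernel device it claimed, e.g. /dev/nbd0.
    nbd=$("$rpc" -s "$sock" nbd_start_disk Nvme0n1)

    # waitfornbd: up to 20 attempts until the kernel lists the device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$(basename "$nbd")" /proc/partitions && break
        sleep 0.1   # assumed pacing; not visible in the trace
    done

    # Readability check: one direct-I/O block must come back with non-zero size.
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" != 0 ]

    # Data verification: push 1 MiB of random data through the device, read it back.
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest "$nbd"

    # Enumerate active mappings, then tear down; waitfornbd_exit polls
    # /proc/partitions again until the name disappears.
    "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'
    "$rpc" -s "$sock" nbd_stop_disk "$nbd"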
00:06:45.292 03:06:48 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:45.292 03:06:48 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:45.292 03:06:48 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:45.292 03:06:48 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.292 03:06:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:45.292 ************************************ 00:06:45.292 START TEST bdev_verify 00:06:45.292 ************************************ 00:06:45.293 03:06:48 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:45.550 [2024-11-18 03:06:48.899702] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:45.551 [2024-11-18 03:06:48.899821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72694 ] 00:06:45.551 [2024-11-18 03:06:49.047800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.551 [2024-11-18 03:06:49.078886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.551 [2024-11-18 03:06:49.079000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.117 Running I/O for 5 seconds... 00:06:48.428 24064.00 IOPS, 94.00 MiB/s [2024-11-18T03:06:52.940Z] 24608.00 IOPS, 96.12 MiB/s [2024-11-18T03:06:53.875Z] 24042.67 IOPS, 93.92 MiB/s [2024-11-18T03:06:54.809Z] 24256.00 IOPS, 94.75 MiB/s [2024-11-18T03:06:54.809Z] 23372.80 IOPS, 91.30 MiB/s 00:06:51.232 Latency(us) 00:06:51.232 [2024-11-18T03:06:54.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:51.232 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:51.232 Verification LBA range: start 0x0 length 0xbd0bd 00:06:51.232 Nvme0n1 : 5.05 1899.27 7.42 0.00 0.00 67240.73 9981.64 74610.22 00:06:51.232 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:51.232 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:51.232 Nvme0n1 : 5.07 1968.58 7.69 0.00 0.00 64298.35 6049.48 67754.14 00:06:51.232 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:51.232 Verification LBA range: start 0x0 length 0xa0000 00:06:51.232 Nvme1n1 : 5.06 1898.66 7.42 0.00 0.00 67166.48 12451.84 70173.93 00:06:51.232 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:51.232 Verification LBA range: start 0xa0000 length 0xa0000 00:06:51.232 Nvme1n1 : 5.04 1954.57 7.64 0.00 0.00 65299.54 10132.87 65737.65 00:06:51.232 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:51.232 Verification LBA range: start 0x0 length 0x80000 00:06:51.232 Nvme2n1 : 5.06 1898.14 7.41 0.00 0.00 67071.53 13308.85 70173.93 00:06:51.232 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:51.232 Verification LBA range: start 0x80000 length 0x80000 00:06:51.232 Nvme2n1 : 5.04 1953.95 7.63 0.00 0.00 65183.52 12250.19 64527.75 00:06:51.232 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:51.232 Verification LBA range: start 0x0 length 0x80000 00:06:51.232 Nvme2n2 : 5.06 1897.45 7.41 0.00 0.00 66973.33 13712.15 68560.74 00:06:51.232 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:51.232 Verification LBA range: start 0x80000 length 0x80000 00:06:51.232 Nvme2n2 : 5.05 1953.41 7.63 0.00 0.00 65069.92 13006.38 64931.05 00:06:51.232 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:51.232 Verification LBA range: start 0x0 length 0x80000 00:06:51.232 Nvme2n3 : 5.06 1895.99 7.41 0.00 0.00 66893.01 12552.66 66140.95 00:06:51.232 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:51.232 Verification LBA range: start 0x80000 length 0x80000 00:06:51.232 Nvme2n3 : 5.06 1960.63 7.66 0.00 0.00 64701.25 5671.38 68560.74 00:06:51.232 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:51.232 Verification LBA range: start 0x0 length 0x20000 00:06:51.232 Nvme3n1 : 5.06 1895.37 7.40 0.00 0.00 66785.91 8267.62 68157.44 00:06:51.232 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:51.232 Verification LBA range: start 0x20000 length 0x20000 00:06:51.232 Nvme3n1 : 5.07 1969.14 7.69 0.00 0.00 64379.11 5747.00 72190.42 00:06:51.232 [2024-11-18T03:06:54.809Z] =================================================================================================================== 00:06:51.232 [2024-11-18T03:06:54.809Z] Total : 23145.15 90.41 0.00 0.00 65903.33 5671.38 74610.22 00:06:51.798 00:06:51.798 real 0m6.244s 00:06:51.798 user 0m11.785s 00:06:51.798 sys 0m0.198s 00:06:51.798 03:06:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.798 ************************************ 00:06:51.798 END TEST bdev_verify 00:06:51.798 ************************************ 00:06:51.798 03:06:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:51.798 03:06:55 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:51.798 03:06:55 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:51.798 03:06:55 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.798 03:06:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.798 ************************************ 00:06:51.798 START TEST bdev_verify_big_io 00:06:51.798 ************************************ 00:06:51.798 03:06:55 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:51.798 [2024-11-18 03:06:55.221462] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:51.798 [2024-11-18 03:06:55.221564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72781 ] 00:06:52.057 [2024-11-18 03:06:55.376153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.057 [2024-11-18 03:06:55.410155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.057 [2024-11-18 03:06:55.410194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.318 Running I/O for 5 seconds... 00:06:56.779 1281.00 IOPS, 80.06 MiB/s [2024-11-18T03:07:01.733Z] 2337.00 IOPS, 146.06 MiB/s [2024-11-18T03:07:01.733Z] 2300.67 IOPS, 143.79 MiB/s [2024-11-18T03:07:01.990Z] 2231.25 IOPS, 139.45 MiB/s 00:06:58.413 Latency(us) 00:06:58.413 [2024-11-18T03:07:01.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.414 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:58.414 Verification LBA range: start 0x0 length 0xbd0b 00:06:58.414 Nvme0n1 : 5.70 134.65 8.42 0.00 0.00 920445.24 29642.44 1084066.26 00:06:58.414 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:58.414 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:58.414 Nvme0n1 : 5.70 134.81 8.43 0.00 0.00 904946.61 104857.60 803370.54 00:06:58.414 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:58.414 Verification LBA range: start 0x0 length 0xa000 00:06:58.414 Nvme1n1 : 5.71 134.56 8.41 0.00 0.00 886670.57 108890.58 877577.45 00:06:58.414 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:58.414 Verification LBA range: start 0xa000 length 0xa000 00:06:58.414 Nvme1n1 : 5.70 134.76 8.42 0.00 0.00 883678.92 105664.20 825955.25 00:06:58.414 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:58.414 Verification LBA range: start 0x0 length 0x8000 00:06:58.414 Nvme2n1 : 5.79 129.66 8.10 0.00 0.00 888059.72 81869.59 1529307.77 00:06:58.414 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:58.414 Verification LBA range: start 0x8000 length 0x8000 00:06:58.414 Nvme2n1 : 5.70 134.72 8.42 0.00 0.00 859434.27 106470.79 845313.58 00:06:58.414 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:58.414 Verification LBA range: start 0x0 length 0x8000 00:06:58.414 Nvme2n2 : 5.86 139.45 8.72 0.00 0.00 802093.01 28029.24 1535760.54 00:06:58.414 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:58.414 Verification LBA range: start 0x8000 length 0x8000 00:06:58.414 Nvme2n2 : 5.78 143.90 8.99 0.00 0.00 785157.64 41136.44 877577.45 00:06:58.414 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:58.414 Verification LBA range: start 0x0 length 0x8000 00:06:58.414 Nvme2n3 : 5.87 149.04 9.31 0.00 0.00 729784.50 9779.99 1580929.97 00:06:58.414 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:58.414 Verification LBA range: start 0x8000 length 0x8000 00:06:58.414 Nvme2n3 : 5.79 154.78 9.67 0.00 0.00 711182.60 1115.37 922746.88 00:06:58.414 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:58.414 Verification LBA range: start 0x0 length 0x2000 00:06:58.414 Nvme3n1 : 5.94 192.26 12.02 0.00 0.00 553465.52 579.74 1180857.90 00:06:58.414 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:58.414 Verification LBA range: start 0x2000 length 0x2000 00:06:58.414 Nvme3n1 : 5.69 130.35 8.15 0.00 0.00 942790.05 36498.51 877577.45 00:06:58.414 [2024-11-18T03:07:01.991Z] =================================================================================================================== 00:06:58.414 [2024-11-18T03:07:01.991Z] Total : 1712.94 107.06 0.00 0.00 808939.64 579.74 1580929.97 00:06:59.422 ************************************ 00:06:59.422 END TEST bdev_verify_big_io 00:06:59.422 ************************************ 00:06:59.422 00:06:59.422 real 0m7.468s 00:06:59.422 user 0m14.177s 00:06:59.422 sys 0m0.227s 00:06:59.422 03:07:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.422 03:07:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:59.422 03:07:02 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:59.422 03:07:02 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:59.422 03:07:02 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.422 03:07:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.422 ************************************ 00:06:59.422 START TEST bdev_write_zeroes 00:06:59.422 ************************************ 00:06:59.422 03:07:02 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:59.422 [2024-11-18 03:07:02.741718] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:59.422 [2024-11-18 03:07:02.741844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72885 ] 00:06:59.422 [2024-11-18 03:07:02.891539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.422 [2024-11-18 03:07:02.946329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.991 Running I/O for 1 seconds... 
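[Editor's note: every pass in this section drives the same bdevperf harness; only the workload flags change. A minimal standalone re-run of the write_zeroes pass now in flight would look like the sketch below (paths exactly as in this log; the flag glosses are the editor's reading of bdevperf's conventional options, not output from the log):

    # -q 128          queue depth
    # -o 4096         I/O size in bytes
    # -w write_zeroes workload type (the verify passes above use -w verify plus -C -m 0x3)
    # -t 1            run time in seconds
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1
]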
00:07:00.924 59136.00 IOPS, 231.00 MiB/s 00:07:00.924 Latency(us) 00:07:00.924 [2024-11-18T03:07:04.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:00.924 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.924 Nvme0n1 : 1.02 9843.87 38.45 0.00 0.00 12969.55 5747.00 24197.91 00:07:00.924 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.924 Nvme1n1 : 1.02 9832.45 38.41 0.00 0.00 12974.48 8721.33 22685.54 00:07:00.924 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.924 Nvme2n1 : 1.02 9821.19 38.36 0.00 0.00 12949.95 8620.50 21778.12 00:07:00.924 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.924 Nvme2n2 : 1.02 9809.99 38.32 0.00 0.00 12940.37 8771.74 21273.99 00:07:00.924 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.924 Nvme2n3 : 1.03 9798.48 38.28 0.00 0.00 12915.14 8570.09 20971.52 00:07:00.924 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.924 Nvme3n1 : 1.03 9787.30 38.23 0.00 0.00 12884.68 8519.68 20366.57 00:07:00.924 [2024-11-18T03:07:04.501Z] =================================================================================================================== 00:07:00.924 [2024-11-18T03:07:04.501Z] Total : 58893.28 230.05 0.00 0.00 12939.03 5747.00 24197.91 00:07:01.183 00:07:01.183 real 0m1.869s 00:07:01.183 user 0m1.580s 00:07:01.183 sys 0m0.175s 00:07:01.183 03:07:04 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.183 ************************************ 00:07:01.183 END TEST bdev_write_zeroes 00:07:01.183 ************************************ 00:07:01.183 03:07:04 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:01.183 03:07:04 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:01.183 03:07:04 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:01.183 03:07:04 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.183 03:07:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.183 ************************************ 00:07:01.183 START TEST bdev_json_nonenclosed 00:07:01.183 ************************************ 00:07:01.183 03:07:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:01.183 [2024-11-18 03:07:04.680204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:01.183 [2024-11-18 03:07:04.680342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72923 ] 00:07:01.442 [2024-11-18 03:07:04.829845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.442 [2024-11-18 03:07:04.864791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.443 [2024-11-18 03:07:04.864871] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:01.443 [2024-11-18 03:07:04.864886] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:01.443 [2024-11-18 03:07:04.864897] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.443 00:07:01.443 real 0m0.329s 00:07:01.443 user 0m0.130s 00:07:01.443 sys 0m0.095s 00:07:01.443 03:07:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.443 03:07:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:01.443 ************************************ 00:07:01.443 END TEST bdev_json_nonenclosed 00:07:01.443 ************************************ 00:07:01.443 03:07:04 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:01.443 03:07:04 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:01.443 03:07:04 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.443 03:07:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.443 ************************************ 00:07:01.443 START TEST bdev_json_nonarray 00:07:01.443 ************************************ 00:07:01.443 03:07:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:01.703 [2024-11-18 03:07:05.068103] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:01.703 [2024-11-18 03:07:05.068219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72949 ] 00:07:01.703 [2024-11-18 03:07:05.218623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.703 [2024-11-18 03:07:05.270899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.703 [2024-11-18 03:07:05.271025] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
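[Editor's note: the two negative tests here (bdev_json_nonenclosed / bdev_json_nonarray) feed deliberately malformed configs and expect spdk_app_stop to exit non-zero, as the error and teardown lines around this point show. Per those errors -- and per the load_subsystem_config call later in this log -- a well-formed SPDK JSON config is a single object whose "subsystems" key is an array; a minimal sketch of that shape (editor's illustration, not the actual test fixture):

    cat <<'EOF' > /tmp/minimal_bdev.json
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF
]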
00:07:01.703 [2024-11-18 03:07:05.271046] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:01.703 [2024-11-18 03:07:05.271060] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.963 00:07:01.963 real 0m0.371s 00:07:01.963 user 0m0.147s 00:07:01.963 sys 0m0.119s 00:07:01.963 03:07:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.963 ************************************ 00:07:01.963 END TEST bdev_json_nonarray 00:07:01.963 ************************************ 00:07:01.963 03:07:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:01.963 03:07:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:01.963 03:07:05 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:01.963 03:07:05 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:01.963 03:07:05 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:01.963 03:07:05 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:01.963 03:07:05 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:01.963 03:07:05 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:01.963 03:07:05 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:01.963 03:07:05 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:01.964 03:07:05 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:01.964 03:07:05 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:01.964 00:07:01.964 real 0m30.507s 00:07:01.964 user 0m47.786s 00:07:01.964 sys 0m5.170s 00:07:01.964 ************************************ 00:07:01.964 03:07:05 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.964 03:07:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.964 END TEST blockdev_nvme 00:07:01.964 ************************************ 00:07:01.964 03:07:05 -- spdk/autotest.sh@209 -- # uname -s 00:07:01.964 03:07:05 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:01.964 03:07:05 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:01.964 03:07:05 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:01.964 03:07:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.964 03:07:05 -- common/autotest_common.sh@10 -- # set +x 00:07:01.964 ************************************ 00:07:01.964 START TEST blockdev_nvme_gpt 00:07:01.964 ************************************ 00:07:01.964 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:02.225 * Looking for test storage... 
00:07:02.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.226 03:07:05 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:02.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.226 --rc genhtml_branch_coverage=1 00:07:02.226 --rc genhtml_function_coverage=1 00:07:02.226 --rc genhtml_legend=1 00:07:02.226 --rc geninfo_all_blocks=1 00:07:02.226 --rc geninfo_unexecuted_blocks=1 00:07:02.226 00:07:02.226 ' 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:02.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.226 --rc 
genhtml_branch_coverage=1 00:07:02.226 --rc genhtml_function_coverage=1 00:07:02.226 --rc genhtml_legend=1 00:07:02.226 --rc geninfo_all_blocks=1 00:07:02.226 --rc geninfo_unexecuted_blocks=1 00:07:02.226 00:07:02.226 ' 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:02.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.226 --rc genhtml_branch_coverage=1 00:07:02.226 --rc genhtml_function_coverage=1 00:07:02.226 --rc genhtml_legend=1 00:07:02.226 --rc geninfo_all_blocks=1 00:07:02.226 --rc geninfo_unexecuted_blocks=1 00:07:02.226 00:07:02.226 ' 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:02.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.226 --rc genhtml_branch_coverage=1 00:07:02.226 --rc genhtml_function_coverage=1 00:07:02.226 --rc genhtml_legend=1 00:07:02.226 --rc geninfo_all_blocks=1 00:07:02.226 --rc geninfo_unexecuted_blocks=1 00:07:02.226 00:07:02.226 ' 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73027 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 73027 
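[Editor's note: start_spdk_tgt records the target's PID (73027 here) and waitforlisten polls until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that launch-and-wait pattern (rpc.py and the spdk_get_version RPC are standard SPDK tooling; the polling loop is the editor's illustration, not the harness code):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!    # kept so the target can be killed on exit
    # poll the default RPC socket until the target is ready to serve requests
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done
]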
00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 73027 ']' 00:07:02.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.226 03:07:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.226 03:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:02.226 [2024-11-18 03:07:05.728563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:02.226 [2024-11-18 03:07:05.728704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73027 ] 00:07:02.487 [2024-11-18 03:07:05.880589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.487 [2024-11-18 03:07:05.931654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.058 03:07:06 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.058 03:07:06 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:03.058 03:07:06 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:03.058 03:07:06 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:03.058 03:07:06 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:03.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:03.579 Waiting for block devices as requested 00:07:03.579 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:03.579 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:03.840 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:03.840 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:09.120 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:09.120 03:07:12 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:09.120 03:07:12 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:09.120 BYT; 00:07:09.120 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:09.120 BYT; 00:07:09.120 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:09.120 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:09.120 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:09.121 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:09.121 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:09.121 03:07:12 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:09.121 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:09.121 03:07:12 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:10.064 The operation has completed successfully. 00:07:10.064 03:07:13 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:11.005 The operation has completed successfully. 00:07:11.005 03:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:11.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:11.837 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:11.837 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:11.837 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:11.837 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:12.097 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:12.097 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.097 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.097 [] 00:07:12.097 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.097 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:12.097 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:12.097 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:12.097 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:12.097 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:12.097 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.097 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.356 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.356 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:12.356 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:12.356 03:07:15 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.356 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.356 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.356 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:12.356 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:12.356 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.356 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.356 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:12.356 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:12.357 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9c6a4ada-a714-4792-97b4-399367ecd365"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9c6a4ada-a714-4792-97b4-399367ecd365",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a9e62d3e-c2e1-4f3e-ad23-5a9c52f10328"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a9e62d3e-c2e1-4f3e-ad23-5a9c52f10328",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "140d1f05-dc19-4c48-9612-e95354e8a394"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "140d1f05-dc19-4c48-9612-e95354e8a394",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3d38bd6b-565d-459c-8387-b96931bd524c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3d38bd6b-565d-459c-8387-b96931bd524c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9d1f7bd8-0def-4f25-8a44-68e3acb78e69"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9d1f7bd8-0def-4f25-8a44-68e3acb78e69",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:12.357 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:12.357 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:12.357 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:12.357 03:07:15 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 73027 00:07:12.357 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 73027 ']' 00:07:12.357 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 73027 00:07:12.357 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:12.357 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.616 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73027 00:07:12.616 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.616 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.616 killing process with pid 73027 00:07:12.616 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73027' 00:07:12.616 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 73027 00:07:12.616 03:07:15 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 73027 00:07:12.878 03:07:16 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:12.878 03:07:16 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:12.878 03:07:16 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:12.878 03:07:16 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.878 03:07:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.878 ************************************ 00:07:12.878 START TEST bdev_hello_world 00:07:12.878 ************************************ 00:07:12.878 03:07:16 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:12.878 
[2024-11-18 03:07:16.316642] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:12.878 [2024-11-18 03:07:16.316790] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73639 ] 00:07:13.140 [2024-11-18 03:07:16.462148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.140 [2024-11-18 03:07:16.506687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.400 [2024-11-18 03:07:16.893605] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:13.400 [2024-11-18 03:07:16.893681] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:13.400 [2024-11-18 03:07:16.893707] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:13.400 [2024-11-18 03:07:16.896270] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:13.400 [2024-11-18 03:07:16.897112] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:13.400 [2024-11-18 03:07:16.897153] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:13.400 [2024-11-18 03:07:16.897532] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:13.400 00:07:13.400 [2024-11-18 03:07:16.897561] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:13.661 00:07:13.661 real 0m0.862s 00:07:13.661 user 0m0.573s 00:07:13.661 sys 0m0.181s 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:13.661 ************************************ 00:07:13.661 END TEST bdev_hello_world 00:07:13.661 ************************************ 00:07:13.661 03:07:17 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:13.661 03:07:17 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:13.661 03:07:17 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.661 03:07:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.661 ************************************ 00:07:13.661 START TEST bdev_bounds 00:07:13.661 ************************************ 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73670 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:13.661 Process bdevio pid: 73670 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73670' 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73670 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73670 ']' 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.661 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.661 03:07:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:13.661 [2024-11-18 03:07:17.210526] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:13.661 [2024-11-18 03:07:17.210655] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73670 ] 00:07:13.920 [2024-11-18 03:07:17.358951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.920 [2024-11-18 03:07:17.406883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.920 [2024-11-18 03:07:17.406946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.920 [2024-11-18 03:07:17.406975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.863 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.863 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:14.863 03:07:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:14.863 I/O targets: 00:07:14.863 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:14.863 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:14.863 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:14.863 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:14.863 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:14.863 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:14.863 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:14.863 00:07:14.863 00:07:14.863 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.863 http://cunit.sourceforge.net/ 00:07:14.863 00:07:14.863 00:07:14.863 Suite: bdevio tests on: Nvme3n1 00:07:14.863 Test: blockdev write read block ...passed 00:07:14.863 Test: blockdev write zeroes read block ...passed 00:07:14.863 Test: blockdev write zeroes read no split ...passed 00:07:14.863 Test: blockdev write zeroes read split ...passed 00:07:14.863 Test: blockdev write zeroes read split partial ...passed 00:07:14.863 Test: blockdev reset ...[2024-11-18 03:07:18.197262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:14.863 [2024-11-18 03:07:18.200540] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
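The COMPARE FAILURE (02/85) notices printed during the comparev-and-writev tests are completion statuses the test deliberately provokes; the test line itself still ends in "passed". A hedged sketch of driving the same bdevio suite by hand, built from the two commands traced above (reading this log's flow, -w appears to make bdevio wait for the perform_tests RPC rather than running immediately):

SPDK_REPO=/home/vagrant/spdk_repo/spdk    # assumption: same layout as this CI job
sudo "$SPDK_REPO/test/bdev/bdevio/bdevio" -w -s 0 \
     --json "$SPDK_REPO/test/bdev/bdev.json" &
sleep 2                                    # crude wait for the app to come up
sudo "$SPDK_REPO/test/bdev/bdevio/tests.py" perform_tests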
00:07:14.863 passed 00:07:14.863 Test: blockdev write read 8 blocks ...passed 00:07:14.863 Test: blockdev write read size > 128k ...passed 00:07:14.863 Test: blockdev write read invalid size ...passed 00:07:14.863 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.863 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.863 Test: blockdev write read max offset ...passed 00:07:14.863 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.863 Test: blockdev writev readv 8 blocks ...passed 00:07:14.863 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.863 Test: blockdev writev readv block ...passed 00:07:14.863 Test: blockdev writev readv size > 128k ...passed 00:07:14.863 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.863 Test: blockdev comparev and writev ...[2024-11-18 03:07:18.210517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2e0e000 len:0x1000 00:07:14.863 [2024-11-18 03:07:18.210563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.863 passed 00:07:14.863 Test: blockdev nvme passthru rw ...passed 00:07:14.863 Test: blockdev nvme passthru vendor specific ...[2024-11-18 03:07:18.211398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:14.863 [2024-11-18 03:07:18.211446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:14.863 passed 00:07:14.863 Test: blockdev nvme admin passthru ...passed 00:07:14.863 Test: blockdev copy ...passed 00:07:14.863 Suite: bdevio tests on: Nvme2n3 00:07:14.863 Test: blockdev write read block ...passed 00:07:14.863 Test: blockdev write zeroes read block ...passed 00:07:14.863 Test: blockdev write zeroes read no split ...passed 00:07:14.863 Test: blockdev write zeroes read split ...passed 00:07:14.863 Test: blockdev write zeroes read split partial ...passed 00:07:14.863 Test: blockdev reset ...[2024-11-18 03:07:18.233872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:14.863 [2024-11-18 03:07:18.235765] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:14.863 passed 00:07:14.863 Test: blockdev write read 8 blocks ...passed 00:07:14.863 Test: blockdev write read size > 128k ...passed 00:07:14.863 Test: blockdev write read invalid size ...passed 00:07:14.863 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.863 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.863 Test: blockdev write read max offset ...passed 00:07:14.863 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.863 Test: blockdev writev readv 8 blocks ...passed 00:07:14.863 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.863 Test: blockdev writev readv block ...passed 00:07:14.863 Test: blockdev writev readv size > 128k ...passed 00:07:14.863 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.863 Test: blockdev comparev and writev ...[2024-11-18 03:07:18.240464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2e0a000 len:0x1000 00:07:14.863 [2024-11-18 03:07:18.240503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.863 passed 00:07:14.863 Test: blockdev nvme passthru rw ...passed 00:07:14.863 Test: blockdev nvme passthru vendor specific ...passed 00:07:14.863 Test: blockdev nvme admin passthru ...[2024-11-18 03:07:18.241184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:14.863 [2024-11-18 03:07:18.241240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:14.863 passed 00:07:14.863 Test: blockdev copy ...passed 00:07:14.863 Suite: bdevio tests on: Nvme2n2 00:07:14.863 Test: blockdev write read block ...passed 00:07:14.863 Test: blockdev write zeroes read block ...passed 00:07:14.863 Test: blockdev write zeroes read no split ...passed 00:07:14.863 Test: blockdev write zeroes read split ...passed 00:07:14.863 Test: blockdev write zeroes read split partial ...passed 00:07:14.863 Test: blockdev reset ...[2024-11-18 03:07:18.256884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:14.863 passed 00:07:14.863 Test: blockdev write read 8 blocks ...[2024-11-18 03:07:18.258791] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:14.863 passed 00:07:14.863 Test: blockdev write read size > 128k ...passed 00:07:14.863 Test: blockdev write read invalid size ...passed 00:07:14.863 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.863 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.863 Test: blockdev write read max offset ...passed 00:07:14.863 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.863 Test: blockdev writev readv 8 blocks ...passed 00:07:14.863 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.863 Test: blockdev writev readv block ...passed 00:07:14.863 Test: blockdev writev readv size > 128k ...passed 00:07:14.863 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.863 Test: blockdev comparev and writev ...[2024-11-18 03:07:18.266148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2daa05000 len:0x1000 00:07:14.863 [2024-11-18 03:07:18.266190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.863 passed 00:07:14.863 Test: blockdev nvme passthru rw ...passed 00:07:14.863 Test: blockdev nvme passthru vendor specific ...passed 00:07:14.863 Test: blockdev nvme admin passthru ...[2024-11-18 03:07:18.266575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:14.863 [2024-11-18 03:07:18.266598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:14.863 passed 00:07:14.863 Test: blockdev copy ...passed 00:07:14.863 Suite: bdevio tests on: Nvme2n1 00:07:14.863 Test: blockdev write read block ...passed 00:07:14.863 Test: blockdev write zeroes read block ...passed 00:07:14.863 Test: blockdev write zeroes read no split ...passed 00:07:14.863 Test: blockdev write zeroes read split ...passed 00:07:14.863 Test: blockdev write zeroes read split partial ...passed 00:07:14.863 Test: blockdev reset ...[2024-11-18 03:07:18.281911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:14.863 [2024-11-18 03:07:18.286991] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:14.863 passed 00:07:14.863 Test: blockdev write read 8 blocks ...passed 00:07:14.863 Test: blockdev write read size > 128k ...passed 00:07:14.863 Test: blockdev write read invalid size ...passed 00:07:14.863 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.863 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.863 Test: blockdev write read max offset ...passed 00:07:14.863 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.863 Test: blockdev writev readv 8 blocks ...passed 00:07:14.863 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.863 Test: blockdev writev readv block ...passed 00:07:14.863 Test: blockdev writev readv size > 128k ...passed 00:07:14.863 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.863 Test: blockdev comparev and writev ...[2024-11-18 03:07:18.291771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2a02000 len:0x1000 00:07:14.863 [2024-11-18 03:07:18.291809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.863 passed 00:07:14.863 Test: blockdev nvme passthru rw ...passed 00:07:14.864 Test: blockdev nvme passthru vendor specific ...[2024-11-18 03:07:18.292854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:14.864 [2024-11-18 03:07:18.292886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:14.864 passed 00:07:14.864 Test: blockdev nvme admin passthru ...passed 00:07:14.864 Test: blockdev copy ...passed 00:07:14.864 Suite: bdevio tests on: Nvme1n1p2 00:07:14.864 Test: blockdev write read block ...passed 00:07:14.864 Test: blockdev write zeroes read block ...passed 00:07:14.864 Test: blockdev write zeroes read no split ...passed 00:07:14.864 Test: blockdev write zeroes read split ...passed 00:07:14.864 Test: blockdev write zeroes read split partial ...passed 00:07:14.864 Test: blockdev reset ...[2024-11-18 03:07:18.320150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:14.864 [2024-11-18 03:07:18.324626] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:14.864 passed 00:07:14.864 Test: blockdev write read 8 blocks ...passed 00:07:14.864 Test: blockdev write read size > 128k ...passed 00:07:14.864 Test: blockdev write read invalid size ...passed 00:07:14.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.864 Test: blockdev write read max offset ...passed 00:07:14.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.864 Test: blockdev writev readv 8 blocks ...passed 00:07:14.864 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.864 Test: blockdev writev readv block ...passed 00:07:14.864 Test: blockdev writev readv size > 128k ...passed 00:07:14.864 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.864 Test: blockdev comparev and writev ...passed 00:07:14.864 Test: blockdev nvme passthru rw ...passed 00:07:14.864 Test: blockdev nvme passthru vendor specific ...passed 00:07:14.864 Test: blockdev nvme admin passthru ...passed 00:07:14.864 Test: blockdev copy ...[2024-11-18 03:07:18.334335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ddc3b000 len:0x1000 00:07:14.864 [2024-11-18 03:07:18.334377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.864 passed 00:07:14.864 Suite: bdevio tests on: Nvme1n1p1 00:07:14.864 Test: blockdev write read block ...passed 00:07:14.864 Test: blockdev write zeroes read block ...passed 00:07:14.864 Test: blockdev write zeroes read no split ...passed 00:07:14.864 Test: blockdev write zeroes read split ...passed 00:07:14.864 Test: blockdev write zeroes read split partial ...passed 00:07:14.864 Test: blockdev reset ...[2024-11-18 03:07:18.348053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:14.864 [2024-11-18 03:07:18.351035] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
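Nvme1n1p1 and Nvme1n1p2 are GPT partitions of the same NVMe controller surfaced as separate bdevs, which is why both suites reset the controller at 0000:00:11.0. A hedged sketch for inspecting one of them over the RPC socket (bdev_get_bdevs is the standard SPDK RPC; the socket path follows the waitforlisten line earlier in this log):

sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
     bdev_get_bdevs -b Nvme1n1p1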
00:07:14.864 passed 00:07:14.864 Test: blockdev write read 8 blocks ...passed 00:07:14.864 Test: blockdev write read size > 128k ...passed 00:07:14.864 Test: blockdev write read invalid size ...passed 00:07:14.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.864 Test: blockdev write read max offset ...passed 00:07:14.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.864 Test: blockdev writev readv 8 blocks ...passed 00:07:14.864 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.864 Test: blockdev writev readv block ...passed 00:07:14.864 Test: blockdev writev readv size > 128k ...passed 00:07:14.864 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.864 Test: blockdev comparev and writev ...[2024-11-18 03:07:18.367207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ddc37000 len:0x1000 00:07:14.864 [2024-11-18 03:07:18.367251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.864 passed 00:07:14.864 Test: blockdev nvme passthru rw ...passed 00:07:14.864 Test: blockdev nvme passthru vendor specific ...passed 00:07:14.864 Test: blockdev nvme admin passthru ...passed 00:07:14.864 Test: blockdev copy ...passed 00:07:14.864 Suite: bdevio tests on: Nvme0n1 00:07:14.864 Test: blockdev write read block ...passed 00:07:14.864 Test: blockdev write zeroes read block ...passed 00:07:14.864 Test: blockdev write zeroes read no split ...passed 00:07:14.864 Test: blockdev write zeroes read split ...passed 00:07:14.864 Test: blockdev write zeroes read split partial ...passed 00:07:14.864 Test: blockdev reset ...[2024-11-18 03:07:18.385430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:14.864 [2024-11-18 03:07:18.386997] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:14.864 passed 00:07:14.864 Test: blockdev write read 8 blocks ...passed 00:07:14.864 Test: blockdev write read size > 128k ...passed 00:07:14.864 Test: blockdev write read invalid size ...passed 00:07:14.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.864 Test: blockdev write read max offset ...passed 00:07:14.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.864 Test: blockdev writev readv 8 blocks ...passed 00:07:14.864 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.864 Test: blockdev writev readv block ...passed 00:07:14.864 Test: blockdev writev readv size > 128k ...passed 00:07:14.864 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.864 Test: blockdev comparev and writev ...passed 00:07:14.864 Test: blockdev nvme passthru rw ...[2024-11-18 03:07:18.394284] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:14.864 separate metadata which is not supported yet. 
00:07:14.864 passed 00:07:14.864 Test: blockdev nvme passthru vendor specific ...passed 00:07:14.864 Test: blockdev nvme admin passthru ...[2024-11-18 03:07:18.395167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:14.864 [2024-11-18 03:07:18.395206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:14.864 passed 00:07:14.864 Test: blockdev copy ...passed 00:07:14.864 00:07:14.864 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.864 suites 7 7 n/a 0 0 00:07:14.864 tests 161 161 161 0 0 00:07:14.864 asserts 1025 1025 1025 0 n/a 00:07:14.864 00:07:14.864 Elapsed time = 0.529 seconds 00:07:14.864 0 00:07:14.864 03:07:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73670 00:07:14.864 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73670 ']' 00:07:14.864 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73670 00:07:14.864 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:14.864 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.864 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73670 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.123 killing process with pid 73670 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73670' 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73670 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73670 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:15.123 00:07:15.123 real 0m1.450s 00:07:15.123 user 0m3.614s 00:07:15.123 sys 0m0.274s 00:07:15.123 ************************************ 00:07:15.123 END TEST bdev_bounds 00:07:15.123 ************************************ 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:15.123 03:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:15.123 03:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:15.123 03:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.123 03:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:15.123 ************************************ 00:07:15.123 START TEST bdev_nbd 00:07:15.123 ************************************ 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73713 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73713 /var/tmp/spdk-nbd.sock 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73713 ']' 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.123 03:07:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:15.382 [2024-11-18 03:07:18.735881] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
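The nbd test works by exporting each bdev as a kernel NBD device. A minimal sketch of the round-trip it automates, assembled from the bdev_svc and nbd_start_disk invocations traced in this log (the backgrounding and the Nvme0n1/nbd0 pairing are illustrative):

SPDK_REPO=/home/vagrant/spdk_repo/spdk    # assumption: same layout as this CI job
sudo "$SPDK_REPO/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-nbd.sock -i 0 \
     --json "$SPDK_REPO/test/bdev/bdev.json" &
sudo "$SPDK_REPO/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock \
     nbd_start_disk Nvme0n1 /dev/nbd0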
00:07:15.382 [2024-11-18 03:07:18.735998] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.382 [2024-11-18 03:07:18.884031] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.382 [2024-11-18 03:07:18.926282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.323 1+0 records in 00:07:16.323 1+0 records out 00:07:16.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115901 s, 3.5 MB/s 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.323 03:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.583 1+0 records in 00:07:16.583 1+0 records out 00:07:16.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114118 s, 3.6 MB/s 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.583 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.842 1+0 records in 00:07:16.842 1+0 records out 00:07:16.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137633 s, 3.0 MB/s 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.842 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.101 1+0 records in 00:07:17.101 1+0 records out 00:07:17.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011851 s, 3.5 MB/s 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:17.101 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.368 1+0 records in 00:07:17.368 1+0 records out 00:07:17.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104002 s, 3.9 MB/s 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:17.368 03:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
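Each mapping is verified by the waitfornbd helper: the device must appear in /proc/partitions, and a single 4 KiB O_DIRECT read through it must produce a non-empty file. A condensed sketch of that check (device name and output path are illustrative):

nbd=nbd0                                   # illustrative: any device mapped above
grep -q -w "$nbd" /proc/partitions         # kernel registered the device
sudo dd if="/dev/$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]     # the read actually returned data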
00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.627 1+0 records in 00:07:17.627 1+0 records out 00:07:17.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00154119 s, 2.7 MB/s 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:17.627 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
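With all seven bdevs exported, the test dumps the device-to-bdev mapping below via nbd_get_disks and extracts the device nodes with jq. The equivalent one-liner, using the RPC and filter traced in this log:

sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
     nbd_get_disks | jq -r '.[] | .nbd_device'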
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.887 1+0 records in 00:07:17.887 1+0 records out 00:07:17.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131834 s, 3.1 MB/s 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:17.887 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.147 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd0", 00:07:18.147 "bdev_name": "Nvme0n1" 00:07:18.147 }, 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd1", 00:07:18.147 "bdev_name": "Nvme1n1p1" 00:07:18.147 }, 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd2", 00:07:18.147 "bdev_name": "Nvme1n1p2" 00:07:18.147 }, 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd3", 00:07:18.147 "bdev_name": "Nvme2n1" 00:07:18.147 }, 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd4", 00:07:18.147 "bdev_name": "Nvme2n2" 00:07:18.147 }, 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd5", 00:07:18.147 "bdev_name": "Nvme2n3" 00:07:18.147 }, 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd6", 00:07:18.147 "bdev_name": "Nvme3n1" 00:07:18.147 } 00:07:18.147 ]' 00:07:18.147 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:18.147 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd0", 00:07:18.147 "bdev_name": "Nvme0n1" 00:07:18.147 }, 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd1", 00:07:18.147 "bdev_name": "Nvme1n1p1" 00:07:18.147 }, 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd2", 00:07:18.147 "bdev_name": "Nvme1n1p2" 00:07:18.147 }, 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd3", 00:07:18.147 "bdev_name": "Nvme2n1" 00:07:18.147 }, 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd4", 00:07:18.147 "bdev_name": "Nvme2n2" 00:07:18.147 }, 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd5", 00:07:18.147 "bdev_name": "Nvme2n3" 00:07:18.147 }, 00:07:18.147 { 00:07:18.147 "nbd_device": "/dev/nbd6", 00:07:18.147 "bdev_name": "Nvme3n1" 00:07:18.147 } 00:07:18.147 ]' 00:07:18.147 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:18.147 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:18.147 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.147 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:18.147 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.147 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:18.147 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.147 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.407 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.407 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.407 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.407 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.407 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.407 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.407 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.407 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.407 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.407 03:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.668 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.668 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.668 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.668 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.668 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.668 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.668 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.668 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.668 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.668 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:18.931 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:18.931 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:18.931 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:18.931 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.931 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.931 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:18.931 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.931 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.931 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.931 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:19.192 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:19.192 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:19.192 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:19.192 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.192 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.192 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:19.192 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.192 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.192 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.192 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:19.452 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:19.452 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:19.452 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:19.452 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.452 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.452 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:19.452 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.452 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.452 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.452 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:19.453 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:19.453 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:19.453 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:19.453 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.453 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.453 03:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:19.453 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.453 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.453 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.453 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:19.715 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:19.715 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:19.715 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:19.715 03:07:23 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.715 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.715 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:19.715 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.715 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.715 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.715 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.715 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.977 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:20.239 /dev/nbd0 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.239 1+0 records in 00:07:20.239 1+0 records out 00:07:20.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515864 s, 7.9 MB/s 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:20.239 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:20.500 /dev/nbd1 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.500 1+0 records in 00:07:20.500 1+0 records out 00:07:20.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000889036 s, 4.6 MB/s 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:20.500 03:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:20.761 /dev/nbd10 00:07:20.761 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:20.761 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:20.761 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:20.761 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.762 1+0 records in 00:07:20.762 1+0 records out 00:07:20.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121128 s, 3.4 MB/s 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 
'!=' 0 ']' 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:20.762 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:21.024 /dev/nbd11 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.024 1+0 records in 00:07:21.024 1+0 records out 00:07:21.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109647 s, 3.7 MB/s 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:21.024 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:21.286 /dev/nbd12 00:07:21.286 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:21.286 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:21.286 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:21.286 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:21.286 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.287 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.287 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:21.287 03:07:24 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:21.287 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.287 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.287 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.287 1+0 records in 00:07:21.287 1+0 records out 00:07:21.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134435 s, 3.0 MB/s 00:07:21.287 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.287 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:21.287 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.287 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.287 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:21.287 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.287 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:21.287 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:21.547 /dev/nbd13 00:07:21.547 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:21.547 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:21.547 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:21.547 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:21.547 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.547 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.547 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:21.547 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:21.547 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.547 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.547 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.548 1+0 records in 00:07:21.548 1+0 records out 00:07:21.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117504 s, 3.5 MB/s 00:07:21.548 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.548 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:21.548 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.548 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.548 03:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:21.548 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:07:21.548 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:21.548 03:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:21.810 /dev/nbd14 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.810 1+0 records in 00:07:21.810 1+0 records out 00:07:21.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116635 s, 3.5 MB/s 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.810 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.072 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:22.072 { 00:07:22.072 "nbd_device": "/dev/nbd0", 00:07:22.072 "bdev_name": "Nvme0n1" 00:07:22.072 }, 00:07:22.072 { 00:07:22.072 "nbd_device": "/dev/nbd1", 00:07:22.072 "bdev_name": "Nvme1n1p1" 00:07:22.072 }, 00:07:22.072 { 00:07:22.072 "nbd_device": "/dev/nbd10", 00:07:22.072 "bdev_name": "Nvme1n1p2" 00:07:22.072 }, 00:07:22.072 { 00:07:22.072 "nbd_device": "/dev/nbd11", 00:07:22.072 "bdev_name": "Nvme2n1" 00:07:22.072 }, 00:07:22.072 { 00:07:22.072 "nbd_device": "/dev/nbd12", 00:07:22.072 "bdev_name": "Nvme2n2" 00:07:22.072 }, 00:07:22.072 { 00:07:22.072 "nbd_device": "/dev/nbd13", 00:07:22.072 "bdev_name": "Nvme2n3" 00:07:22.072 }, 00:07:22.072 { 
00:07:22.072 "nbd_device": "/dev/nbd14", 00:07:22.072 "bdev_name": "Nvme3n1" 00:07:22.072 } 00:07:22.072 ]' 00:07:22.072 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:22.072 { 00:07:22.072 "nbd_device": "/dev/nbd0", 00:07:22.072 "bdev_name": "Nvme0n1" 00:07:22.072 }, 00:07:22.072 { 00:07:22.072 "nbd_device": "/dev/nbd1", 00:07:22.072 "bdev_name": "Nvme1n1p1" 00:07:22.072 }, 00:07:22.072 { 00:07:22.072 "nbd_device": "/dev/nbd10", 00:07:22.072 "bdev_name": "Nvme1n1p2" 00:07:22.072 }, 00:07:22.072 { 00:07:22.072 "nbd_device": "/dev/nbd11", 00:07:22.072 "bdev_name": "Nvme2n1" 00:07:22.072 }, 00:07:22.072 { 00:07:22.072 "nbd_device": "/dev/nbd12", 00:07:22.072 "bdev_name": "Nvme2n2" 00:07:22.072 }, 00:07:22.072 { 00:07:22.072 "nbd_device": "/dev/nbd13", 00:07:22.072 "bdev_name": "Nvme2n3" 00:07:22.072 }, 00:07:22.072 { 00:07:22.072 "nbd_device": "/dev/nbd14", 00:07:22.072 "bdev_name": "Nvme3n1" 00:07:22.072 } 00:07:22.072 ]' 00:07:22.072 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.072 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:22.072 /dev/nbd1 00:07:22.072 /dev/nbd10 00:07:22.072 /dev/nbd11 00:07:22.072 /dev/nbd12 00:07:22.072 /dev/nbd13 00:07:22.072 /dev/nbd14' 00:07:22.072 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:22.072 /dev/nbd1 00:07:22.072 /dev/nbd10 00:07:22.072 /dev/nbd11 00:07:22.072 /dev/nbd12 00:07:22.072 /dev/nbd13 00:07:22.072 /dev/nbd14' 00:07:22.072 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.072 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:22.072 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:22.072 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:22.073 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:22.073 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:22.073 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:22.073 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.073 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:22.073 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:22.073 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:22.073 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:22.073 256+0 records in 00:07:22.073 256+0 records out 00:07:22.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00996051 s, 105 MB/s 00:07:22.073 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.073 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:22.334 256+0 records in 00:07:22.334 256+0 records out 00:07:22.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.19825 s, 5.3 MB/s 00:07:22.334 
03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.334 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:22.595 256+0 records in 00:07:22.595 256+0 records out 00:07:22.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241866 s, 4.3 MB/s 00:07:22.595 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.595 03:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:22.856 256+0 records in 00:07:22.856 256+0 records out 00:07:22.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237872 s, 4.4 MB/s 00:07:22.856 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.856 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:22.856 256+0 records in 00:07:22.856 256+0 records out 00:07:22.856 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136912 s, 7.7 MB/s 00:07:22.856 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.856 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:23.117 256+0 records in 00:07:23.117 256+0 records out 00:07:23.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155836 s, 6.7 MB/s 00:07:23.117 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.117 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:23.117 256+0 records in 00:07:23.118 256+0 records out 00:07:23.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113009 s, 9.3 MB/s 00:07:23.118 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.118 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:23.380 256+0 records in 00:07:23.380 256+0 records out 00:07:23.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.200308 s, 5.2 MB/s 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.380 03:07:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:23.380 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.381 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:23.381 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.381 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:23.381 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:23.381 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:23.381 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.381 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:23.381 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.381 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:23.381 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.381 03:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.642 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.642 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.642 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.642 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.642 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.642 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.642 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.642 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.642 
03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.643 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:23.904 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:23.904 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:23.904 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:23.904 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.904 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.904 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:23.904 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.904 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.904 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.904 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:24.166 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:24.166 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:24.166 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:24.166 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.166 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.166 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:24.166 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.166 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.166 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.167 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:24.167 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:24.428 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:24.428 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:24.428 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:24.429 03:07:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.429 03:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:24.690 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:24.690 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:24.690 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:24.690 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.690 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.690 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:24.690 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.690 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.690 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.690 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:24.951 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:24.951 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:24.951 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:24.951 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.951 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.951 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:24.951 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.951 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.951 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.951 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.951 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:25.232 
03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:25.232 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:25.503 malloc_lvol_verify 00:07:25.503 03:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:25.763 f6f1ef5d-37cf-4dd1-81a9-3f27f22f846b 00:07:25.763 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:26.024 8cd6b3b1-3185-4a5b-bdec-863ce17d1709 00:07:26.024 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:26.024 /dev/nbd0 00:07:26.024 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:26.024 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:26.024 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:26.024 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:26.024 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:26.024 mke2fs 1.47.0 (5-Feb-2023) 00:07:26.024 Discarding device blocks: 0/4096 done 00:07:26.024 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:26.024 00:07:26.024 Allocating group tables: 0/1 done 00:07:26.024 Writing inode tables: 0/1 done 00:07:26.024 Creating journal (1024 blocks): done 00:07:26.024 Writing superblocks and filesystem accounting information: 0/1 done 00:07:26.024 00:07:26.024 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:26.024 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.024 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:26.024 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.024 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:26.024 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.024 03:07:29 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73713 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73713 ']' 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73713 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73713 00:07:26.285 killing process with pid 73713 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73713' 00:07:26.285 03:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73713 00:07:26.286 03:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73713 00:07:26.547 ************************************ 00:07:26.547 END TEST bdev_nbd 00:07:26.547 ************************************ 00:07:26.547 03:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:26.547 00:07:26.547 real 0m11.354s 00:07:26.547 user 0m15.945s 00:07:26.547 sys 0m3.966s 00:07:26.547 03:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.547 03:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:26.547 03:07:30 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:26.547 03:07:30 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:26.547 skipping fio tests on NVMe due to multi-ns failures. 00:07:26.547 03:07:30 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:26.547 03:07:30 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
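The bdev_nbd phase above boils down to: start each bdev on an NBD device over the spdk-nbd.sock RPC socket, poll /proc/partitions until the kernel registers the device, push random data through dd, verify it with cmp, and tear everything down again. A minimal bash sketch of the readiness check driven by the waitfornbd xtrace lines, with the temp-file path and sleep interval assumed since the log does not show them:

    waitfornbd() {
        local nbd_name=$1 i
        # Poll up to 20 times until the kernel lists the device.
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing; not visible in the xtrace
        done
        # One 4 KiB O_DIRECT read proves the device actually serves data,
        # mirroring the dd/stat/rm sequence in the trace above.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }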
00:07:26.547 03:07:30 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:26.547 03:07:30 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:26.547 03:07:30 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:26.547 03:07:30 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.547 03:07:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.547 ************************************ 00:07:26.547 START TEST bdev_verify 00:07:26.547 ************************************ 00:07:26.547 03:07:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:26.808 [2024-11-18 03:07:30.129019] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:26.808 [2024-11-18 03:07:30.129116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74129 ] 00:07:26.808 [2024-11-18 03:07:30.269306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.808 [2024-11-18 03:07:30.300158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.808 [2024-11-18 03:07:30.300199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.381 Running I/O for 5 seconds... 
00:07:29.711 22144.00 IOPS, 86.50 MiB/s [2024-11-18T03:07:33.860Z] 20704.00 IOPS, 80.88 MiB/s [2024-11-18T03:07:35.245Z] 19690.67 IOPS, 76.92 MiB/s [2024-11-18T03:07:36.190Z] 19904.00 IOPS, 77.75 MiB/s [2024-11-18T03:07:36.190Z] 19558.40 IOPS, 76.40 MiB/s 00:07:32.613 Latency(us) 00:07:32.613 [2024-11-18T03:07:36.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.613 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0x0 length 0xbd0bd 00:07:32.613 Nvme0n1 : 5.10 1406.31 5.49 0.00 0.00 90801.24 17543.48 79046.50 00:07:32.613 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:32.613 Nvme0n1 : 5.09 1357.13 5.30 0.00 0.00 94076.14 17140.18 79853.10 00:07:32.613 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0x0 length 0x4ff80 00:07:32.613 Nvme1n1p1 : 5.10 1405.89 5.49 0.00 0.00 90581.69 18047.61 78239.90 00:07:32.613 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:32.613 Nvme1n1p1 : 5.09 1356.66 5.30 0.00 0.00 93991.14 21273.99 75820.11 00:07:32.613 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0x0 length 0x4ff7f 00:07:32.613 Nvme1n1p2 : 5.10 1404.98 5.49 0.00 0.00 90475.53 19156.68 72997.02 00:07:32.613 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:32.613 Nvme1n1p2 : 5.10 1355.79 5.30 0.00 0.00 93882.36 22685.54 71787.13 00:07:32.613 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0x0 length 0x80000 00:07:32.613 Nvme2n1 : 5.11 1403.75 5.48 0.00 0.00 90339.63 20568.22 69770.63 00:07:32.613 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0x80000 length 0x80000 00:07:32.613 Nvme2n1 : 5.10 1354.92 5.29 0.00 0.00 93739.68 20870.70 70980.53 00:07:32.613 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0x0 length 0x80000 00:07:32.613 Nvme2n2 : 5.11 1403.02 5.48 0.00 0.00 90227.28 21475.64 72190.42 00:07:32.613 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0x80000 length 0x80000 00:07:32.613 Nvme2n2 : 5.10 1354.17 5.29 0.00 0.00 93589.80 19559.98 75013.51 00:07:32.613 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0x0 length 0x80000 00:07:32.613 Nvme2n3 : 5.11 1402.56 5.48 0.00 0.00 90108.44 17140.18 75416.81 00:07:32.613 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0x80000 length 0x80000 00:07:32.613 Nvme2n3 : 5.11 1353.81 5.29 0.00 0.00 93419.91 18450.90 78239.90 00:07:32.613 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0x0 length 0x20000 00:07:32.613 Nvme3n1 : 5.11 1402.17 5.48 0.00 0.00 89986.79 10687.41 79046.50 00:07:32.613 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.613 Verification LBA range: start 0x20000 length 0x20000 00:07:32.613 
Nvme3n1 : 5.11 1353.36 5.29 0.00 0.00 93249.62 10939.47 78643.20 00:07:32.613 [2024-11-18T03:07:36.190Z] =================================================================================================================== 00:07:32.613 [2024-11-18T03:07:36.190Z] Total : 19314.51 75.45 0.00 0.00 92003.09 10687.41 79853.10 00:07:33.186 00:07:33.186 real 0m6.383s 00:07:33.186 user 0m12.034s 00:07:33.186 sys 0m0.193s 00:07:33.186 03:07:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.186 03:07:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:33.186 ************************************ 00:07:33.186 END TEST bdev_verify 00:07:33.186 ************************************ 00:07:33.186 03:07:36 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:33.186 03:07:36 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:33.186 03:07:36 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.186 03:07:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.186 ************************************ 00:07:33.186 START TEST bdev_verify_big_io 00:07:33.186 ************************************ 00:07:33.186 03:07:36 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:33.186 [2024-11-18 03:07:36.590997] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:33.186 [2024-11-18 03:07:36.591144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74222 ] 00:07:33.186 [2024-11-18 03:07:36.743220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.448 [2024-11-18 03:07:36.797244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.448 [2024-11-18 03:07:36.797333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.709 Running I/O for 5 seconds... 
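Everything in the bdev_verify run above is driven by a single bdevperf invocation; the sketch below only restates the command line already visible in the trace, with the flags annotated:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q 128: 128 outstanding I/Os per job
    # -o 4096: 4 KiB I/O size
    # -w verify: read-back verification workload
    # -t 5: run for 5 seconds
    # -m 0x3: core mask for two reactors (cores 0 and 1, matching the two
    #         "Reactor started" notices above)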
00:07:37.690 480.00 IOPS, 30.00 MiB/s [2024-11-18T03:07:41.529Z] 1242.00 IOPS, 77.62 MiB/s [2024-11-18T03:07:43.448Z] 1240.33 IOPS, 77.52 MiB/s [2024-11-18T03:07:43.448Z] 1611.50 IOPS, 100.72 MiB/s [2024-11-18T03:07:43.710Z] 1945.40 IOPS, 121.59 MiB/s 00:07:40.133 Latency(us) 00:07:40.133 [2024-11-18T03:07:43.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.133 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0x0 length 0xbd0b 00:07:40.133 Nvme0n1 : 5.83 103.55 6.47 0.00 0.00 1158193.14 19559.98 1290555.08 00:07:40.133 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:40.133 Nvme0n1 : 5.78 94.19 5.89 0.00 0.00 1292715.58 31860.58 1664816.05 00:07:40.133 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0x0 length 0x4ff8 00:07:40.133 Nvme1n1p1 : 5.83 109.35 6.83 0.00 0.00 1086848.90 103244.41 1174405.12 00:07:40.133 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:40.133 Nvme1n1p1 : 5.89 109.11 6.82 0.00 0.00 1083194.12 98404.82 1116330.14 00:07:40.133 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0x0 length 0x4ff7 00:07:40.133 Nvme1n1p2 : 5.94 111.51 6.97 0.00 0.00 1031393.17 100421.32 987274.63 00:07:40.133 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:40.133 Nvme1n1p2 : 5.78 110.76 6.92 0.00 0.00 1050969.95 107277.39 1058255.16 00:07:40.133 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0x0 length 0x8000 00:07:40.133 Nvme2n1 : 6.00 114.42 7.15 0.00 0.00 982564.10 56865.08 1419610.58 00:07:40.133 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0x8000 length 0x8000 00:07:40.133 Nvme2n1 : 5.90 112.80 7.05 0.00 0.00 993374.45 108890.58 1077613.49 00:07:40.133 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0x0 length 0x8000 00:07:40.133 Nvme2n2 : 6.09 113.38 7.09 0.00 0.00 955729.97 58074.98 2064888.12 00:07:40.133 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0x8000 length 0x8000 00:07:40.133 Nvme2n2 : 6.03 122.66 7.67 0.00 0.00 891436.20 49807.36 1109877.37 00:07:40.133 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0x0 length 0x8000 00:07:40.133 Nvme2n3 : 6.11 122.37 7.65 0.00 0.00 861687.67 25004.50 2116510.33 00:07:40.133 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0x8000 length 0x8000 00:07:40.133 Nvme2n3 : 6.06 130.32 8.15 0.00 0.00 815936.86 27222.65 1135688.47 00:07:40.133 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0x0 length 0x2000 00:07:40.133 Nvme3n1 : 6.16 153.51 9.59 0.00 0.00 666890.26 541.93 2155226.98 00:07:40.133 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.133 Verification LBA range: start 0x2000 length 
0x2000 00:07:40.133 Nvme3n1 : 6.12 151.37 9.46 0.00 0.00 682478.73 636.46 1148594.02 00:07:40.133 [2024-11-18T03:07:43.710Z] =================================================================================================================== 00:07:40.133 [2024-11-18T03:07:43.710Z] Total : 1659.32 103.71 0.00 0.00 942763.93 541.93 2155226.98 00:07:40.706 00:07:40.706 real 0m7.733s 00:07:40.706 user 0m14.617s 00:07:40.706 sys 0m0.293s 00:07:40.706 03:07:44 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.706 ************************************ 00:07:40.706 END TEST bdev_verify_big_io 00:07:40.706 ************************************ 00:07:40.706 03:07:44 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:40.968 03:07:44 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.968 03:07:44 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:40.968 03:07:44 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.968 03:07:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.968 ************************************ 00:07:40.968 START TEST bdev_write_zeroes 00:07:40.968 ************************************ 00:07:40.968 03:07:44 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.968 [2024-11-18 03:07:44.378778] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:40.968 [2024-11-18 03:07:44.378901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74321 ] 00:07:40.968 [2024-11-18 03:07:44.525397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.229 [2024-11-18 03:07:44.556487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.491 Running I/O for 1 seconds... 
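A quick sanity check on the big-I/O totals above: with -o 65536 each I/O is 64 KiB, so the MiB/s column is just IOPS scaled by 65536/1048576, i.e. IOPS/16:

    # Total row: 1659.32 IOPS at 64 KiB per I/O
    echo 'scale=4; 1659.32 * 65536 / 1048576' | bc   # -> 103.7075, shown as 103.71 MiB/s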
00:07:42.436 60032.00 IOPS, 234.50 MiB/s 00:07:42.436 Latency(us) 00:07:42.436 [2024-11-18T03:07:46.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.436 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:42.436 Nvme0n1 : 1.03 8550.28 33.40 0.00 0.00 14938.24 6024.27 30650.68 00:07:42.436 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:42.436 Nvme1n1p1 : 1.03 8539.82 33.36 0.00 0.00 14934.42 10939.47 28029.24 00:07:42.436 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:42.436 Nvme1n1p2 : 1.03 8529.41 33.32 0.00 0.00 14861.27 10183.29 25407.80 00:07:42.436 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:42.436 Nvme2n1 : 1.03 8519.84 33.28 0.00 0.00 14856.10 11342.77 24197.91 00:07:42.436 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:42.436 Nvme2n2 : 1.03 8510.32 33.24 0.00 0.00 14851.34 11342.77 24500.38 00:07:42.436 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:42.436 Nvme2n3 : 1.03 8500.81 33.21 0.00 0.00 14843.73 11241.94 25508.63 00:07:42.436 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:42.436 Nvme3n1 : 1.03 8491.24 33.17 0.00 0.00 14832.88 10183.29 26819.35 00:07:42.436 [2024-11-18T03:07:46.013Z] =================================================================================================================== 00:07:42.436 [2024-11-18T03:07:46.013Z] Total : 59641.72 232.98 0.00 0.00 14874.00 6024.27 30650.68 00:07:42.698 00:07:42.698 real 0m1.845s 00:07:42.698 user 0m1.577s 00:07:42.698 sys 0m0.157s 00:07:42.698 03:07:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.698 03:07:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:42.698 ************************************ 00:07:42.698 END TEST bdev_write_zeroes 00:07:42.698 ************************************ 00:07:42.698 03:07:46 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:42.698 03:07:46 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:42.698 03:07:46 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.698 03:07:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.698 ************************************ 00:07:42.698 START TEST bdev_json_nonenclosed 00:07:42.698 ************************************ 00:07:42.698 03:07:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:42.960 [2024-11-18 03:07:46.284550] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:42.960 [2024-11-18 03:07:46.284662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74362 ] 00:07:42.960 [2024-11-18 03:07:46.429295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.960 [2024-11-18 03:07:46.465577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.960 [2024-11-18 03:07:46.465670] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:42.960 [2024-11-18 03:07:46.465685] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:42.960 [2024-11-18 03:07:46.465696] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.222 00:07:43.222 real 0m0.327s 00:07:43.222 user 0m0.121s 00:07:43.222 sys 0m0.102s 00:07:43.222 ************************************ 00:07:43.222 03:07:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.222 03:07:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:43.222 END TEST bdev_json_nonenclosed 00:07:43.222 ************************************ 00:07:43.222 03:07:46 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.222 03:07:46 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:43.222 03:07:46 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.222 03:07:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:43.222 ************************************ 00:07:43.222 START TEST bdev_json_nonarray 00:07:43.222 ************************************ 00:07:43.222 03:07:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.222 [2024-11-18 03:07:46.674329] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:43.222 [2024-11-18 03:07:46.674467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74383 ] 00:07:43.483 [2024-11-18 03:07:46.823304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.483 [2024-11-18 03:07:46.874533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.483 [2024-11-18 03:07:46.874655] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
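The two JSON negative tests here each feed bdevperf a deliberately malformed config and expect spdk_app_start to fail with the errors shown. A sketch of the shapes involved; the exact fixture contents are an assumption inferred from the error messages, but the skeleton of a valid SPDK JSON config is a top-level object holding a "subsystems" array:

# Valid skeleton: a top-level object whose "subsystems" member is an array.
cat > /tmp/enclosed.json <<'EOF'
{ "subsystems": [] }
EOF
# nonenclosed.json presumably drops the outer {}        -> "not enclosed in {}"
# nonarray.json presumably makes "subsystems" non-array -> "'subsystems' should be an array"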
00:07:43.483 [2024-11-18 03:07:46.874676] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:43.483 [2024-11-18 03:07:46.874692] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.483 00:07:43.483 real 0m0.369s 00:07:43.483 user 0m0.157s 00:07:43.483 sys 0m0.107s 00:07:43.483 ************************************ 00:07:43.483 END TEST bdev_json_nonarray 00:07:43.483 ************************************ 00:07:43.483 03:07:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.483 03:07:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:43.483 03:07:47 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:43.483 03:07:47 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:43.483 03:07:47 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:43.483 03:07:47 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.483 03:07:47 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.483 03:07:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:43.483 ************************************ 00:07:43.483 START TEST bdev_gpt_uuid 00:07:43.483 ************************************ 00:07:43.483 03:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:43.483 03:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:43.483 03:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:43.483 03:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74403 00:07:43.483 03:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:43.483 03:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74403 00:07:43.483 03:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74403 ']' 00:07:43.483 03:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.744 03:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.744 03:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.744 03:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:43.744 03:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.744 03:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:43.744 [2024-11-18 03:07:47.171047] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:43.744 [2024-11-18 03:07:47.171268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74403 ] 00:07:44.005 [2024-11-18 03:07:47.336476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.005 [2024-11-18 03:07:47.388109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.575 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.575 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:44.575 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:44.575 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.575 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:44.836 Some configs were skipped because the RPC state that can call them passed over. 00:07:44.836 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.836 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:44.836 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.836 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:44.836 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.836 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:44.836 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.836 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:44.836 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.836 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:44.836 { 00:07:44.836 "name": "Nvme1n1p1", 00:07:44.836 "aliases": [ 00:07:44.836 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:44.836 ], 00:07:44.836 "product_name": "GPT Disk", 00:07:44.836 "block_size": 4096, 00:07:44.836 "num_blocks": 655104, 00:07:44.836 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:44.836 "assigned_rate_limits": { 00:07:44.836 "rw_ios_per_sec": 0, 00:07:44.836 "rw_mbytes_per_sec": 0, 00:07:44.836 "r_mbytes_per_sec": 0, 00:07:44.836 "w_mbytes_per_sec": 0 00:07:44.836 }, 00:07:44.836 "claimed": false, 00:07:44.836 "zoned": false, 00:07:44.836 "supported_io_types": { 00:07:44.836 "read": true, 00:07:44.836 "write": true, 00:07:44.836 "unmap": true, 00:07:44.836 "flush": true, 00:07:44.836 "reset": true, 00:07:44.836 "nvme_admin": false, 00:07:44.836 "nvme_io": false, 00:07:44.836 "nvme_io_md": false, 00:07:44.837 "write_zeroes": true, 00:07:44.837 "zcopy": false, 00:07:44.837 "get_zone_info": false, 00:07:44.837 "zone_management": false, 00:07:44.837 "zone_append": false, 00:07:44.837 "compare": true, 00:07:44.837 "compare_and_write": false, 00:07:44.837 "abort": true, 00:07:44.837 "seek_hole": false, 00:07:44.837 "seek_data": false, 00:07:44.837 "copy": true, 00:07:44.837 "nvme_iov_md": false 00:07:44.837 }, 00:07:44.837 "driver_specific": { 
00:07:44.837 "gpt": { 00:07:44.837 "base_bdev": "Nvme1n1", 00:07:44.837 "offset_blocks": 256, 00:07:44.837 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:44.837 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:44.837 "partition_name": "SPDK_TEST_first" 00:07:44.837 } 00:07:44.837 } 00:07:44.837 } 00:07:44.837 ]' 00:07:44.837 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:45.099 { 00:07:45.099 "name": "Nvme1n1p2", 00:07:45.099 "aliases": [ 00:07:45.099 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:45.099 ], 00:07:45.099 "product_name": "GPT Disk", 00:07:45.099 "block_size": 4096, 00:07:45.099 "num_blocks": 655103, 00:07:45.099 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:45.099 "assigned_rate_limits": { 00:07:45.099 "rw_ios_per_sec": 0, 00:07:45.099 "rw_mbytes_per_sec": 0, 00:07:45.099 "r_mbytes_per_sec": 0, 00:07:45.099 "w_mbytes_per_sec": 0 00:07:45.099 }, 00:07:45.099 "claimed": false, 00:07:45.099 "zoned": false, 00:07:45.099 "supported_io_types": { 00:07:45.099 "read": true, 00:07:45.099 "write": true, 00:07:45.099 "unmap": true, 00:07:45.099 "flush": true, 00:07:45.099 "reset": true, 00:07:45.099 "nvme_admin": false, 00:07:45.099 "nvme_io": false, 00:07:45.099 "nvme_io_md": false, 00:07:45.099 "write_zeroes": true, 00:07:45.099 "zcopy": false, 00:07:45.099 "get_zone_info": false, 00:07:45.099 "zone_management": false, 00:07:45.099 "zone_append": false, 00:07:45.099 "compare": true, 00:07:45.099 "compare_and_write": false, 00:07:45.099 "abort": true, 00:07:45.099 "seek_hole": false, 00:07:45.099 "seek_data": false, 00:07:45.099 "copy": true, 00:07:45.099 "nvme_iov_md": false 00:07:45.099 }, 00:07:45.099 "driver_specific": { 00:07:45.099 "gpt": { 00:07:45.099 "base_bdev": "Nvme1n1", 00:07:45.099 "offset_blocks": 655360, 00:07:45.099 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:45.099 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:45.099 "partition_name": "SPDK_TEST_second" 00:07:45.099 } 00:07:45.099 } 00:07:45.099 } 00:07:45.099 ]' 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid 
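The jq assertions in this test pull the partition GUIDs back out of the bdev_get_bdevs JSON dumps above. While spdk_tgt is still running, the same lookup works interactively over the RPC socket; a sketch using rpc.py, with the bdev name, jq filter, and expected GUID all taken from the trace:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/scripts/rpc.py" bdev_get_bdevs -b Nvme1n1p1 \
  | jq -r '.[0].driver_specific.gpt.unique_partition_guid'
# expected output: 6f89f330-603b-4116-ac73-2ca8eae53030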
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74403 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74403 ']' 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74403 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74403 00:07:45.099 killing process with pid 74403 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74403' 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74403 00:07:45.099 03:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74403 00:07:45.674 00:07:45.674 real 0m1.975s 00:07:45.674 user 0m2.111s 00:07:45.674 sys 0m0.446s 00:07:45.674 ************************************ 00:07:45.674 END TEST bdev_gpt_uuid 00:07:45.674 ************************************ 00:07:45.674 03:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.674 03:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.674 03:07:49 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:45.674 03:07:49 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:45.674 03:07:49 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:45.674 03:07:49 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:45.674 03:07:49 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.674 03:07:49 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:45.674 03:07:49 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:45.674 03:07:49 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:45.674 03:07:49 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:45.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:46.196 Waiting for block devices as requested 00:07:46.196 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.196 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:46.196 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.457 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:51.751 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:51.751 03:07:54 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:51.751 03:07:54 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:51.751 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:51.751 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:51.751 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:51.751 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:51.751 03:07:55 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:51.751 00:07:51.751 real 0m49.712s 00:07:51.751 user 1m2.402s 00:07:51.751 sys 0m8.341s 00:07:51.751 03:07:55 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.751 ************************************ 00:07:51.751 END TEST blockdev_nvme_gpt 00:07:51.751 ************************************ 00:07:51.751 03:07:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.751 03:07:55 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:51.751 03:07:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.751 03:07:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.751 03:07:55 -- common/autotest_common.sh@10 -- # set +x 00:07:51.751 ************************************ 00:07:51.751 START TEST nvme 00:07:51.751 ************************************ 00:07:51.751 03:07:55 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:52.013 * Looking for test storage... 00:07:52.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:52.013 03:07:55 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:52.013 03:07:55 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:52.013 03:07:55 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:52.013 03:07:55 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:52.013 03:07:55 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.013 03:07:55 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.013 03:07:55 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.013 03:07:55 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.013 03:07:55 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.013 03:07:55 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.013 03:07:55 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.013 03:07:55 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.013 03:07:55 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.013 03:07:55 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.013 03:07:55 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.013 03:07:55 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:52.013 03:07:55 nvme -- scripts/common.sh@345 -- # : 1 00:07:52.013 03:07:55 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.013 03:07:55 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
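The cleanup step above rebinds the devices to the kernel nvme driver and erases the GPT and PMBR signatures written by the gpt tests, so the next suite starts from blank namespaces. Done by hand, the same two commands shown in the trace would be (run as root; the device name matches the wipefs output above):

SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/scripts/setup.sh" reset   # rebind NVMe devices to kernel drivers
sudo wipefs --all /dev/nvme0n1        # erase the GPT and protective-MBR signatures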
ver1_l : ver2_l) )) 00:07:52.013 03:07:55 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:52.013 03:07:55 nvme -- scripts/common.sh@353 -- # local d=1 00:07:52.013 03:07:55 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.013 03:07:55 nvme -- scripts/common.sh@355 -- # echo 1 00:07:52.013 03:07:55 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.013 03:07:55 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:52.013 03:07:55 nvme -- scripts/common.sh@353 -- # local d=2 00:07:52.013 03:07:55 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.013 03:07:55 nvme -- scripts/common.sh@355 -- # echo 2 00:07:52.013 03:07:55 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.013 03:07:55 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.013 03:07:55 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.013 03:07:55 nvme -- scripts/common.sh@368 -- # return 0 00:07:52.013 03:07:55 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.013 03:07:55 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:52.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.013 --rc genhtml_branch_coverage=1 00:07:52.013 --rc genhtml_function_coverage=1 00:07:52.013 --rc genhtml_legend=1 00:07:52.013 --rc geninfo_all_blocks=1 00:07:52.013 --rc geninfo_unexecuted_blocks=1 00:07:52.013 00:07:52.013 ' 00:07:52.013 03:07:55 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:52.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.013 --rc genhtml_branch_coverage=1 00:07:52.013 --rc genhtml_function_coverage=1 00:07:52.013 --rc genhtml_legend=1 00:07:52.013 --rc geninfo_all_blocks=1 00:07:52.013 --rc geninfo_unexecuted_blocks=1 00:07:52.013 00:07:52.013 ' 00:07:52.013 03:07:55 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:52.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.013 --rc genhtml_branch_coverage=1 00:07:52.013 --rc genhtml_function_coverage=1 00:07:52.013 --rc genhtml_legend=1 00:07:52.013 --rc geninfo_all_blocks=1 00:07:52.013 --rc geninfo_unexecuted_blocks=1 00:07:52.013 00:07:52.013 ' 00:07:52.013 03:07:55 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:52.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.013 --rc genhtml_branch_coverage=1 00:07:52.013 --rc genhtml_function_coverage=1 00:07:52.013 --rc genhtml_legend=1 00:07:52.013 --rc geninfo_all_blocks=1 00:07:52.013 --rc geninfo_unexecuted_blocks=1 00:07:52.013 00:07:52.013 ' 00:07:52.013 03:07:55 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:52.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:53.158 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.158 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.158 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.158 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.158 03:07:56 nvme -- nvme/nvme.sh@79 -- # uname 00:07:53.158 03:07:56 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:53.158 03:07:56 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:53.158 03:07:56 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:53.158 03:07:56 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:53.158 03:07:56 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:53.158 03:07:56 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:53.158 Waiting for stub to ready for secondary processes... 00:07:53.158 03:07:56 nvme -- common/autotest_common.sh@1071 -- # stubpid=75034 00:07:53.158 03:07:56 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:53.158 03:07:56 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:53.158 03:07:56 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:53.158 03:07:56 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/75034 ]] 00:07:53.158 03:07:56 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:53.158 [2024-11-18 03:07:56.625540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:53.158 [2024-11-18 03:07:56.625867] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:54.102 [2024-11-18 03:07:57.568376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.102 03:07:57 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:54.102 03:07:57 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/75034 ]] 00:07:54.102 03:07:57 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:54.102 [2024-11-18 03:07:57.597027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.102 [2024-11-18 03:07:57.597350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.102 [2024-11-18 03:07:57.597434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.102 [2024-11-18 03:07:57.609894] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:54.102 [2024-11-18 03:07:57.609945] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:54.102 [2024-11-18 03:07:57.624396] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:54.102 [2024-11-18 03:07:57.624663] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:54.102 [2024-11-18 03:07:57.626086] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:54.102 [2024-11-18 03:07:57.626619] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:54.102 [2024-11-18 03:07:57.626795] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:54.102 [2024-11-18 03:07:57.628392] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:54.102 [2024-11-18 03:07:57.628916] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:54.102 [2024-11-18 03:07:57.629224] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:54.102 [2024-11-18 03:07:57.631393] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:54.102 [2024-11-18 03:07:57.631823] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:54.102 [2024-11-18 03:07:57.632112] nvme_cuse.c: 
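Each cuse_session_create notice in this stretch corresponds to a character device the stub exposes while it runs, one per controller and one per namespace. A sketch for inspecting them; the /dev/spdk prefix is an assumption based on the spdk/nvmeXnY names in the notices:

# Controllers appear as /dev/spdk/nvme0..nvme3, namespaces as /dev/spdk/nvme0n1 etc.
ls -l /dev/spdk/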
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:54.102 [2024-11-18 03:07:57.632219] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:54.102 [2024-11-18 03:07:57.632390] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:55.045 03:07:58 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:55.045 done. 00:07:55.045 03:07:58 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:55.045 03:07:58 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:55.045 03:07:58 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:55.045 03:07:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.045 03:07:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.045 ************************************ 00:07:55.045 START TEST nvme_reset 00:07:55.045 ************************************ 00:07:55.045 03:07:58 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:55.305 Initializing NVMe Controllers 00:07:55.305 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:55.305 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:55.305 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:55.305 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:55.305 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:55.305 ************************************ 00:07:55.305 END TEST nvme_reset 00:07:55.305 ************************************ 00:07:55.305 00:07:55.305 real 0m0.206s 00:07:55.305 user 0m0.059s 00:07:55.305 sys 0m0.095s 00:07:55.305 03:07:58 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.305 03:07:58 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:55.305 03:07:58 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:55.305 03:07:58 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.305 03:07:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.305 03:07:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.569 ************************************ 00:07:55.569 START TEST nvme_identify 00:07:55.569 ************************************ 00:07:55.569 03:07:58 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:55.569 03:07:58 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:55.569 03:07:58 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:55.569 03:07:58 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:55.569 03:07:58 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:55.569 03:07:58 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:55.569 03:07:58 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:55.569 03:07:58 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:55.569 03:07:58 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:55.570 03:07:58 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:55.570 03:07:58 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:55.570 03:07:58 nvme.nvme_identify -- 
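The identify dump that follows enumerates every controller reported by gen_nvme.sh. To reproduce it for a single controller, spdk_nvme_identify can be pointed at one transport ID instead; the -r syntax here is an assumption based on the tool's usual transport-ID flag, with the PCI address taken from the list printed above:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:11.0'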
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:55.570 03:07:58 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:55.570 ===================================================== 00:07:55.570 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:55.570 ===================================================== 00:07:55.570 Controller Capabilities/Features 00:07:55.570 ================================ 00:07:55.570 Vendor ID: 1b36 00:07:55.570 Subsystem Vendor ID: 1af4 00:07:55.570 Serial Number: 12341 00:07:55.570 Model Number: QEMU NVMe Ctrl 00:07:55.570 Firmware Version: 8.0.0 00:07:55.570 Recommended Arb Burst: 6 00:07:55.570 IEEE OUI Identifier: 00 54 52 00:07:55.570 Multi-path I/O 00:07:55.570 May have multiple subsystem ports: No 00:07:55.570 May have multiple controllers: No 00:07:55.570 Associated with SR-IOV VF: No 00:07:55.570 Max Data Transfer Size: 524288 00:07:55.570 Max Number of Namespaces: 256 00:07:55.570 Max Number of I/O Queues: 64 00:07:55.570 NVMe Specification Version (VS): 1.4 00:07:55.570 NVMe Specification Version (Identify): 1.4 00:07:55.570 Maximum Queue Entries: 2048 00:07:55.570 Contiguous Queues Required: Yes 00:07:55.570 Arbitration Mechanisms Supported 00:07:55.570 Weighted Round Robin: Not Supported 00:07:55.570 Vendor Specific: Not Supported 00:07:55.570 Reset Timeout: 7500 ms 00:07:55.570 Doorbell Stride: 4 bytes 00:07:55.570 NVM Subsystem Reset: Not Supported 00:07:55.570 Command Sets Supported 00:07:55.570 NVM Command Set: Supported 00:07:55.570 Boot Partition: Not Supported 00:07:55.570 Memory Page Size Minimum: 4096 bytes 00:07:55.570 Memory Page Size Maximum: 65536 bytes 00:07:55.570 Persistent Memory Region: Not Supported 00:07:55.570 Optional Asynchronous Events Supported 00:07:55.570 Namespace Attribute Notices: Supported 00:07:55.570 Firmware Activation Notices: Not Supported 00:07:55.570 ANA Change Notices: Not Supported 00:07:55.570 PLE Aggregate Log Change Notices: Not Supported 00:07:55.570 LBA Status Info Alert Notices: Not Supported 00:07:55.570 EGE Aggregate Log Change Notices: Not Supported 00:07:55.570 Normal NVM Subsystem Shutdown event: Not Supported 00:07:55.570 Zone Descriptor Change Notices: Not Supported 00:07:55.570 Discovery Log Change Notices: Not Supported 00:07:55.570 Controller Attributes 00:07:55.570 128-bit Host Identifier: Not Supported 00:07:55.570 Non-Operational Permissive Mode: Not Supported 00:07:55.570 NVM Sets: Not Supported 00:07:55.570 Read Recovery Levels: Not Supported 00:07:55.570 Endurance Groups: Not Supported 00:07:55.570 Predictable Latency Mode: Not Supported 00:07:55.570 Traffic Based Keep ALive: Not Supported 00:07:55.570 Namespace Granularity: Not Supported 00:07:55.570 SQ Associations: Not Supported 00:07:55.570 UUID List: Not Supported 00:07:55.570 Multi-Domain Subsystem: Not Supported 00:07:55.570 Fixed Capacity Management: Not Supported 00:07:55.570 Variable Capacity Management: Not Supported 00:07:55.570 Delete Endurance Group: Not Supported 00:07:55.570 Delete NVM Set: Not Supported 00:07:55.570 Extended LBA Formats Supported: Supported 00:07:55.570 Flexible Data Placement Supported: Not Supported 00:07:55.570 00:07:55.570 Controller Memory Buffer Support 00:07:55.570 ================================ 00:07:55.570 Supported: No 00:07:55.570 00:07:55.570 Persistent Memory Region Support 00:07:55.570 ================================ 00:07:55.570 Supported: No 00:07:55.570 00:07:55.570 Admin 
Command Set Attributes 00:07:55.570 ============================ 00:07:55.570 Security Send/Receive: Not Supported 00:07:55.570 Format NVM: Supported 00:07:55.570 Firmware Activate/Download: Not Supported 00:07:55.570 Namespace Management: Supported 00:07:55.570 Device Self-Test: Not Supported 00:07:55.570 Directives: Supported 00:07:55.570 NVMe-MI: Not Supported 00:07:55.570 Virtualization Management: Not Supported 00:07:55.570 Doorbell Buffer Config: Supported 00:07:55.570 Get LBA Status Capability: Not Supported 00:07:55.570 Command & Feature Lockdown Capability: Not Supported 00:07:55.570 Abort Command Limit: 4 00:07:55.570 Async Event Request Limit: 4 00:07:55.570 Number of Firmware Slots: N/A 00:07:55.570 Firmware Slot 1 Read-Only: N/A 00:07:55.570 Firmware Activation Without Reset: N/A 00:07:55.570 Multiple Update Detection Support: N/A 00:07:55.570 Firmware Update Granularity: No Information Provided 00:07:55.570 Per-Namespace SMART Log: Yes 00:07:55.570 Asymmetric Namespace Access Log Page: Not Supported 00:07:55.570 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:55.570 Command Effects Log Page: Supported 00:07:55.570 Get Log Page Extended Data: Supported 00:07:55.570 Telemetry Log Pages: Not Supported 00:07:55.570 Persistent Event Log Pages: Not Supported 00:07:55.570 Supported Log Pages Log Page: May Support 00:07:55.570 Commands Supported & Effects Log Page: Not Supported 00:07:55.570 Feature Identifiers & Effects Log Page:May Support 00:07:55.570 NVMe-MI Commands & Effects Log Page: May Support 00:07:55.570 Data Area 4 for Telemetry Log: Not Supported 00:07:55.570 Error Log Page Entries Supported: 1 00:07:55.570 Keep Alive: Not Supported 00:07:55.570 00:07:55.570 NVM Command Set Attributes 00:07:55.570 ========================== 00:07:55.570 Submission Queue Entry Size 00:07:55.570 Max: 64 00:07:55.570 Min: 64 00:07:55.570 Completion Queue Entry Size 00:07:55.570 Max: 16 00:07:55.571 Min: 16 00:07:55.571 Number of Namespaces: 256 00:07:55.571 Compare Command: Supported 00:07:55.571 Write Uncorrectable Command: Not Supported 00:07:55.571 Dataset Management Command: Supported 00:07:55.571 Write Zeroes Command: Supported 00:07:55.571 Set Features Save Field: Supported 00:07:55.571 Reservations: Not Supported 00:07:55.571 Timestamp: Supported 00:07:55.571 Copy: Supported 00:07:55.571 Volatile Write Cache: Present 00:07:55.571 Atomic Write Unit (Normal): 1 00:07:55.571 Atomic Write Unit (PFail): 1 00:07:55.571 Atomic Compare & Write Unit: 1 00:07:55.571 Fused Compare & Write: Not Supported 00:07:55.571 Scatter-Gather List 00:07:55.571 SGL Command Set: Supported 00:07:55.571 SGL Keyed: Not Supported 00:07:55.571 SGL Bit Bucket Descriptor: Not Supported 00:07:55.571 SGL Metadata Pointer: Not Supported 00:07:55.571 Oversized SGL: Not Supported 00:07:55.571 SGL Metadata Address: Not Supported 00:07:55.571 SGL Offset: Not Supported 00:07:55.571 Transport SGL Data Block: Not Supported 00:07:55.571 Replay Protected Memory Block: Not Supported 00:07:55.571 00:07:55.571 Firmware Slot Information 00:07:55.571 ========================= 00:07:55.571 Active slot: 1 00:07:55.571 Slot 1 Firmware Revision: 1.0 00:07:55.571 00:07:55.571 00:07:55.571 Commands Supported and Effects 00:07:55.571 ============================== 00:07:55.571 Admin Commands 00:07:55.571 -------------- 00:07:55.571 Delete I/O Submission Queue (00h): Supported 00:07:55.571 Create I/O Submission Queue (01h): Supported 00:07:55.571 Get Log Page (02h): Supported 00:07:55.571 Delete I/O Completion Queue (04h): Supported 
00:07:55.571 Create I/O Completion Queue (05h): Supported 00:07:55.571 Identify (06h): Supported 00:07:55.571 Abort (08h): Supported 00:07:55.571 Set Features (09h): Supported 00:07:55.571 Get Features (0Ah): Supported 00:07:55.571 Asynchronous Event Request (0Ch): Supported 00:07:55.571 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:55.571 Directive Send (19h): Supported 00:07:55.571 Directive Receive (1Ah): Supported 00:07:55.571 Virtualization Management (1Ch): Supported 00:07:55.571 Doorbell Buffer Config (7Ch): Supported 00:07:55.571 Format NVM (80h): Supported LBA-Change 00:07:55.571 I/O Commands 00:07:55.571 ------------ 00:07:55.571 Flush (00h): Supported LBA-Change 00:07:55.571 Write (01h): Supported LBA-Change 00:07:55.571 Read (02h): Supported 00:07:55.571 Compare (05h): Supported 00:07:55.571 Write Zeroes (08h): Supported LBA-Change 00:07:55.571 Dataset Management (09h): Supported LBA-Change 00:07:55.571 Unknown (0Ch): Supported 00:07:55.571 Unknown (12h): Supported 00:07:55.571 Copy (19h): Supported LBA-Change 00:07:55.571 Unknown (1Dh): Supported LBA-Change 00:07:55.571 00:07:55.571 Error Log 00:07:55.571 ========= 00:07:55.571 00:07:55.571 Arbitration 00:07:55.571 =========== 00:07:55.571 Arbitration Burst: no limit 00:07:55.571 00:07:55.571 Power Management 00:07:55.571 ================ 00:07:55.571 Number of Power States: 1 00:07:55.571 Current Power State: Power State #0 00:07:55.571 Power State #0: 00:07:55.571 Max Power: 25.00 W 00:07:55.571 Non-Operational State: Operational 00:07:55.571 Entry Latency: 16 microseconds 00:07:55.571 Exit Latency: 4 microseconds 00:07:55.571 Relative Read Throughput: 0 00:07:55.571 Relative Read Latency: 0 00:07:55.571 Relative Write Throughput: 0 00:07:55.571 Relative Write Latency: 0 00:07:55.571 Idle Power: Not Reported 00:07:55.571 Active Power: Not Reported 00:07:55.571 Non-Operational Permissive Mode: Not Supported 00:07:55.571 00:07:55.571 Health Information 00:07:55.571 ================== 00:07:55.571 Critical Warnings: 00:07:55.571 Available Spare Space: OK 00:07:55.571 Temperature: OK 00:07:55.571 Device Reliability: OK 00:07:55.571 Read Only: No 00:07:55.571 Volatile Memory Backup: OK 00:07:55.571 Current Temperature: 323 Kelvin (50 Celsius) 00:07:55.571 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:55.571 Available Spare: 0% 00:07:55.571 Available Spare Threshold: 0% 00:07:55.571 Life Percentage Used: 0% 00:07:55.571 Data Units Read: 1054 00:07:55.571 Data Units Written: 921 00:07:55.571 Host Read Commands: 53796 00:07:55.571 Host Write Commands: 52566 00:07:55.571 Controller Busy Time: 0 minutes 00:07:55.571 Power Cycles: 0 00:07:55.571 Power On Hours: 0 hours 00:07:55.571 Unsafe Shutdowns: 0 00:07:55.571 Unrecoverable Media Errors: 0 00:07:55.571 Lifetime Error Log Entries: 0 00:07:55.571 Warning Temperature Time: 0 minutes 00:07:55.571 Critical Temperature Time: 0 minutes 00:07:55.571 00:07:55.571 Number of Queues 00:07:55.571 ================ 00:07:55.571 Number of I/O Submission Queues: 64 00:07:55.571 Number of I/O Completion Queues: 64 00:07:55.571 00:07:55.571 ZNS Specific Controller Data 00:07:55.571 ============================ 00:07:55.571 Zone Append Size Limit: 0 00:07:55.571 00:07:55.571 00:07:55.571 Active Namespaces 00:07:55.571 ================= 00:07:55.571 Namespace ID:1 00:07:55.571 Error Recovery Timeout: Unlimited 00:07:55.571 Command Set Identifier: NVM (00h) 00:07:55.571 Deallocate: Supported 00:07:55.571 Deallocated/Unwritten Error: Supported 00:07:55.571 Deallocated Read Value: 
All 0x00 00:07:55.571 Deallocate in Write Zeroes: Not Supported 00:07:55.572 Deallocated Guard Field: 0xFFFF 00:07:55.572 Flush: Supported 00:07:55.572 Reservation: Not Supported 00:07:55.572 Namespace Sharing Capabilities: Private 00:07:55.572 Size (in LBAs): 1310720 (5GiB) 00:07:55.572 Capacity (in LBAs): 1310720 (5GiB) 00:07:55.572 Utilization (in LBAs): 1310720 (5GiB) 00:07:55.572 Thin Provisioning: Not Supported 00:07:55.572 Per-NS Atomic Units: No 00:07:55.572 Maximum Single Source Range Length: 128 00:07:55.572 Maximum Copy Length: 128 00:07:55.572 Maximum Source Range Count: 128 00:07:55.572 NGUID/EUI64 Never Reused: No 00:07:55.572 Namespace Write Protected: No 00:07:55.572 Number of LBA Formats: 8 00:07:55.572 Current LBA Format: LBA Format #04 00:07:55.572 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:55.572 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:55.572 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:55.572 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:55.572 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:55.572 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:55.572 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:55.572 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:55.572 00:07:55.572 NVM Specific Namespace Data 00:07:55.572 =========================== 00:07:55.572 Logical Block Storage Tag Mask: 0 00:07:55.572 Protection Information Capabilities: 00:07:55.572 16b Guard Protection Information Storage Tag Support: No 00:07:55.572 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:55.572 Storage Tag Check Read Support: No 00:07:55.572 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.572 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.572 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.572 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.572 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.572 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.572 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.572 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.572 ===================================================== 00:07:55.572 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:55.572 ===================================================== 00:07:55.572 Controller Capabilities/Features 00:07:55.572 ================================ 00:07:55.572 Vendor ID: 1b36 00:07:55.572 Subsystem Vendor ID: 1af4 00:07:55.572 Serial Number: 12343 00:07:55.572 Model Number: QEMU NVMe Ctrl 00:07:55.572 Firmware Version: 8.0.0 00:07:55.572 Recommended Arb Burst: 6 00:07:55.572 IEEE OUI Identifier: 00 54 52 00:07:55.572 Multi-path I/O 00:07:55.572 May have multiple subsystem ports: No 00:07:55.572 May have multiple controllers: Yes 00:07:55.572 Associated with SR-IOV VF: No 00:07:55.572 Max Data Transfer Size: 524288 00:07:55.572 Max Number of Namespaces: 256 00:07:55.572 Max Number of I/O Queues: 64 00:07:55.572 NVMe Specification Version (VS): 1.4 00:07:55.572 NVMe Specification Version (Identify): 1.4 00:07:55.572 Maximum Queue Entries: 2048 00:07:55.572 Contiguous Queues Required: Yes 00:07:55.572 Arbitration 
Mechanisms Supported 00:07:55.572 Weighted Round Robin: Not Supported 00:07:55.572 Vendor Specific: Not Supported 00:07:55.572 Reset Timeout: 7500 ms 00:07:55.572 Doorbell Stride: 4 bytes 00:07:55.572 NVM Subsystem Reset: Not Supported 00:07:55.572 Command Sets Supported 00:07:55.572 NVM Command Set: Supported 00:07:55.572 Boot Partition: Not Supported 00:07:55.572 Memory Page Size Minimum: 4096 bytes 00:07:55.572 Memory Page Size Maximum: 65536 bytes 00:07:55.572 Persistent Memory Region: Not Supported 00:07:55.572 Optional Asynchronous Events Supported 00:07:55.572 Namespace Attribute Notices: Supported 00:07:55.572 Firmware Activation Notices: Not Supported 00:07:55.572 ANA Change Notices: Not Supported 00:07:55.572 PLE Aggregate Log Change Notices: Not Supported 00:07:55.572 LBA Status Info Alert Notices: Not Supported 00:07:55.572 EGE Aggregate Log Change Notices: Not Supported 00:07:55.572 Normal NVM Subsystem Shutdown event: Not Supported 00:07:55.572 Zone Descriptor Change Notices: Not Supported 00:07:55.572 Discovery Log Change Notices: Not Supported 00:07:55.572 Controller Attributes 00:07:55.572 128-bit Host Identifier: Not Supported 00:07:55.572 Non-Operational Permissive Mode: Not Supported 00:07:55.572 NVM Sets: Not Supported 00:07:55.572 Read Recovery Levels: Not Supported 00:07:55.572 Endurance Groups: Supported 00:07:55.572 Predictable Latency Mode: Not Supported 00:07:55.572 Traffic Based Keep ALive: Not Supported 00:07:55.572 Namespace Granularity: Not Supported 00:07:55.572 SQ Associations: Not Supported 00:07:55.572 UUID List: Not Supported 00:07:55.572 Multi-Domain Subsystem: Not Supported 00:07:55.572 Fixed Capacity Management: Not Supported 00:07:55.572 Variable Capacity Management: Not Supported 00:07:55.572 Delete Endurance Group: Not Supported 00:07:55.572 Delete NVM Set: Not Supported 00:07:55.572 Extended LBA Formats Supported: Supported 00:07:55.572 Flexible Data Placement Supported: Supported 00:07:55.572 00:07:55.572 Controller Memory Buffer Support 00:07:55.572 ================================ 00:07:55.572 Supported: No 00:07:55.572 00:07:55.572 Persistent Memory Region Support 00:07:55.572 ================================ 00:07:55.572 Supported: No 00:07:55.572 00:07:55.572 Admin Command Set Attributes 00:07:55.572 ============================ 00:07:55.572 Security Send/Receive: Not Supported 00:07:55.572 Format NVM: Supported 00:07:55.573 Firmware Activate/Download: Not Supported 00:07:55.573 Namespace Management: Supported 00:07:55.573 Device Self-Test: Not Supported 00:07:55.573 Directives: Supported 00:07:55.573 NVMe-MI: Not Supported 00:07:55.573 Virtualization Management: Not Supported 00:07:55.573 Doorbell Buffer Config: Supported 00:07:55.573 Get LBA Status Capability: Not Supported 00:07:55.573 Command & Feature Lockdown Capability: Not Supported 00:07:55.573 Abort Command Limit: 4 00:07:55.573 Async Event Request Limit: 4 00:07:55.573 Number of Firmware Slots: N/A 00:07:55.573 Firmware Slot 1 Read-Only: N/A 00:07:55.573 Firmware Activation Without Reset: N/A 00:07:55.573 Multiple Update Detection Support: N/A 00:07:55.573 Firmware Update Granularity: No Information Provided 00:07:55.573 Per-Namespace SMART Log: Yes 00:07:55.573 Asymmetric Namespace Access Log Page: Not Supported 00:07:55.573 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:55.573 Command Effects Log Page: Supported 00:07:55.573 Get Log Page Extended Data: Supported 00:07:55.573 Telemetry Log Pages: Not Supported 00:07:55.573 Persistent Event Log Pages: Not Supported 
00:07:55.573 Supported Log Pages Log Page: May Support 00:07:55.573 Commands Supported & Effects Log Page: Not Supported 00:07:55.573 Feature Identifiers & Effects Log Page:May Support 00:07:55.573 NVMe-MI Commands & Effects Log Page: May Support 00:07:55.573 Data Area 4 for Telemetry Log: Not Supported 00:07:55.573 Error Log Page Entries Supported: 1 00:07:55.573 Keep Alive: Not Supported 00:07:55.573 00:07:55.573 NVM Command Set Attributes 00:07:55.573 ========================== 00:07:55.573 Submission Queue Entry Size 00:07:55.573 Max: 64 00:07:55.573 Min: 64 00:07:55.573 Completion Queue Entry Size 00:07:55.573 Max: 16 00:07:55.573 Min: 16 00:07:55.573 Number of Namespaces: 256 00:07:55.573 Compare Command: Supported 00:07:55.573 Write Uncorrectable Command: Not Supported 00:07:55.573 Dataset Management Command: Supported 00:07:55.573 Write Zeroes Command: Supported 00:07:55.573 Set Features Save Field: Supported 00:07:55.573 Reservations: Not Supported 00:07:55.573 Timestamp: Supported 00:07:55.573 Copy: Supported 00:07:55.573 Volatile Write Cache: Present 00:07:55.573 Atomic Write Unit (Normal): 1 00:07:55.573 Atomic Write Unit (PFail): 1 00:07:55.573 Atomic Compare & Write Unit: 1 00:07:55.573 Fused Compare & Write: Not Supported 00:07:55.573 Scatter-Gather List 00:07:55.573 SGL Command Set: Supported 00:07:55.573 SGL Keyed: Not Supported 00:07:55.573 SGL Bit Bucket Descriptor: Not Supported 00:07:55.573 SGL Metadata Pointer: Not Supported 00:07:55.573 Oversized SGL: Not Supported 00:07:55.573 SGL Metadata Address: Not Supported 00:07:55.573 SGL Offset: Not Supported 00:07:55.573 Transport SGL Data Block: Not Supported 00:07:55.573 Replay Protected Memory Block: Not Supported 00:07:55.573 00:07:55.573 Firmware Slot Information 00:07:55.573 ========================= 00:07:55.573 Active slot: 1 00:07:55.573 Slot 1 Firmware Revision: 1.0 00:07:55.573 00:07:55.573 00:07:55.573 Commands Supported and Effects 00:07:55.573 ============================== 00:07:55.573 Admin Commands 00:07:55.573 -------------- 00:07:55.573 Delete I/O Submission Queue (00h): Supported 00:07:55.573 Create I/O Submission Queue (01h): Supported 00:07:55.573 Get Log Page (02h): Supported 00:07:55.573 Delete I/O Completion Queue (04h): Supported 00:07:55.573 Create I/O Completion Queue (05h): Supported 00:07:55.573 Identify (06h): Supported 00:07:55.573 Abort (08h): Supported 00:07:55.573 Set Features (09h): Supported 00:07:55.573 Get Features (0Ah): Supported 00:07:55.573 Asynchronous Event Request (0Ch): Supported 00:07:55.573 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:55.573 Directive Send (19h): Supported 00:07:55.573 Directive Receive (1Ah): Supported 00:07:55.573 Virtualization Management (1Ch): Supported 00:07:55.573 Doorbell Buffer Config (7Ch): Supported 00:07:55.573 Format NVM (80h): Supported LBA-Change 00:07:55.573 I/O Commands 00:07:55.573 ------------ 00:07:55.573 Flush (00h): Supported LBA-Change 00:07:55.573 Write (01h): Supported LBA-Change 00:07:55.573 Read (02h): Supported 00:07:55.573 Compare (05h): Supported 00:07:55.573 Write Zeroes (08h): Supported LBA-Change 00:07:55.573 Dataset Management (09h): Supported LBA-Change 00:07:55.573 Unknown (0Ch): Supported 00:07:55.573 Unknown (12h): Supported 00:07:55.573 Copy (19h): Supported LBA-Change 00:07:55.573 Unknown (1Dh): Supported LBA-Change 00:07:55.573 00:07:55.573 Error Log 00:07:55.573 ========= 00:07:55.573 00:07:55.573 Arbitration 00:07:55.573 =========== 00:07:55.573 Arbitration Burst: no limit 00:07:55.573 00:07:55.573 
Power Management 00:07:55.573 ================ 00:07:55.573 Number of Power States: 1 00:07:55.573 Current Power State: Power State #0 00:07:55.573 Power State #0: 00:07:55.573 Max Power: 25.00 W 00:07:55.573 Non-Operational State: Operational 00:07:55.573 Entry Latency: 16 microseconds 00:07:55.573 Exit Latency: 4 microseconds 00:07:55.573 Relative Read Throughput: 0 00:07:55.573 Relative Read Latency: 0 00:07:55.573 Relative Write Throughput: 0 00:07:55.573 Relative Write Latency: 0 00:07:55.573 Idle Power: Not Reported 00:07:55.573 Active Power: Not Reported 00:07:55.573 Non-Operational Permissive Mode: Not Supported 00:07:55.573 00:07:55.574 Health Information 00:07:55.574 ================== 00:07:55.574 Critical Warnings: 00:07:55.574 Available Spare Space: OK 00:07:55.574 Temperature: OK 00:07:55.574 Device Reliability: OK 00:07:55.574 Read Only: No 00:07:55.574 Volatile Memory Backup: OK 00:07:55.574 Current Temperature: 323 Kelvin (50 Celsius) 00:07:55.574 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:55.574 Available Spare: 0% 00:07:55.574 Available Spare Threshold: 0% 00:07:55.574 Life Percentage Used: 0% 00:07:55.574 Data Units Read: 830 00:07:55.574 Data Units Written: 759 00:07:55.574 Host Read Commands: 38542 00:07:55.574 Host Write Commands: 37965 00:07:55.574 Controller Busy Time: 0 minutes 00:07:55.574 Power Cycles: 0 00:07:55.574 Power On Hours: 0 hours 00:07:55.574 Unsafe Shutdowns: 0 00:07:55.574 Unrecoverable Media Errors: 0 00:07:55.574 Lifetime Error Log Entries: 0 00:07:55.574 Warning Temperature Time: 0 minutes 00:07:55.574 Critical Temperature Time: 0 minutes 00:07:55.574 00:07:55.574 Number of Queues 00:07:55.574 ================ 00:07:55.574 Number of I/O Submission Queues: 64 00:07:55.574 Number of I/O Completion Queues: 64 00:07:55.574 00:07:55.574 ZNS Specific Controller Data 00:07:55.574 ============================ 00:07:55.574 Zone Append Size Limit: 0 00:07:55.574 00:07:55.574 00:07:55.574 Active Namespaces 00:07:55.574 ================= 00:07:55.574 Namespace ID:1 00:07:55.574 Error Recovery Timeout: Unlimited 00:07:55.574 Command Set Identifier: NVM (00h) 00:07:55.574 Deallocate: Supported 00:07:55.574 Deallocated/Unwritten Error: Supported 00:07:55.574 Deallocated Read Value: All 0x00 00:07:55.574 Deallocate in Write Zeroes: Not Supported 00:07:55.574 Deallocated Guard Field: 0xFFFF 00:07:55.574 Flush: Supported 00:07:55.574 Reservation: Not Supported 00:07:55.574 Namespace Sharing Capabilities: Multiple Controllers 00:07:55.574 Size (in LBAs): 262144 (1GiB) 00:07:55.574 Capacity (in LBAs): 262144 (1GiB) 00:07:55.574 Utilization (in LBAs): 262144 (1GiB) 00:07:55.574 Thin Provisioning: Not Supported 00:07:55.574 Per-NS Atomic Units: No 00:07:55.574 Maximum Single Source Range Length: 128 00:07:55.574 Maximum Copy Length: 128 00:07:55.574 Maximum Source Range Count: 128 00:07:55.574 NGUID/EUI64 Never Reused: No 00:07:55.574 Namespace Write Protected: No 00:07:55.574 Endurance group ID: 1 00:07:55.574 Number of LBA Formats: 8 00:07:55.574 Current LBA Format: LBA Format #04 00:07:55.574 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:55.574 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:55.574 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:55.574 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:55.574 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:55.574 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:55.574 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:55.574 LBA Format #07: Data Size: 4096 
Metadata Size: 64 00:07:55.574 00:07:55.574 Get Feature FDP: 00:07:55.574 ================ 00:07:55.574 Enabled: Yes 00:07:55.574 FDP configuration index: 0 00:07:55.574 00:07:55.574 FDP configurations log page 00:07:55.574 =========================== 00:07:55.574 Number of FDP configurations: 1 00:07:55.574 Version: 0 00:07:55.574 Size: 112 00:07:55.574 FDP Configuration Descriptor: 0 00:07:55.574 Descriptor Size: 96 00:07:55.574 Reclaim Group Identifier format: 2 00:07:55.574 FDP Volatile Write Cache: Not Present 00:07:55.574 FDP Configuration: Valid 00:07:55.574 Vendor Specific Size: 0 00:07:55.574 Number of Reclaim Groups: 2 00:07:55.574 Number of Reclaim Unit Handles: 8 00:07:55.574 Max Placement Identifiers: 128 00:07:55.574 Number of Namespaces Supported: 256 00:07:55.574 Reclaim unit Nominal Size: 6000000 bytes 00:07:55.574 Estimated Reclaim Unit Time Limit: Not Reported 00:07:55.574 RUH Desc #000: RUH Type: Initially Isolated 00:07:55.574 RUH Desc #001: RUH Type: Initially Isolated 00:07:55.574 RUH Desc #002: RUH Type: Initially Isolated 00:07:55.574 RUH Desc #003: RUH Type: Initially Isolated 00:07:55.574 RUH Desc #004: RUH Type: Initially Isolated 00:07:55.574 RUH Desc #005: RUH Type: Initially Isolated 00:07:55.574 RUH Desc #006: RUH Type: Initially Isolated 00:07:55.574 RUH Desc #007: RUH Type: Initially Isolated 00:07:55.574 00:07:55.574 FDP reclaim unit handle usage log page 00:07:55.574 ====================================== 00:07:55.574 Number of Reclaim Unit Handles: 8 00:07:55.574 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:55.574 RUH Usage Desc #001: RUH Attributes: Unused 00:07:55.574 RUH Usage Desc #002: RUH Attributes: Unused 00:07:55.574 RUH Usage Desc #003: RUH Attributes: Unused 00:07:55.574 RUH Usage Desc #004: RUH Attributes: Unused 00:07:55.574 [2024-11-18 03:07:59.111110] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 75067 terminated unexpectedly 00:07:55.574 [2024-11-18 03:07:59.112487] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 75067 terminated unexpectedly 00:07:55.574 [2024-11-18 03:07:59.114721] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 75067 terminated unexpectedly 00:07:55.574 RUH Usage Desc #005: RUH Attributes: Unused 00:07:55.574 RUH Usage Desc #006: RUH Attributes: Unused 00:07:55.574 RUH Usage Desc #007: RUH Attributes: Unused 00:07:55.574 00:07:55.574 FDP statistics log page 00:07:55.574 ======================= 00:07:55.574 Host bytes with metadata written: 481206272 00:07:55.574 Media bytes with metadata written: 481259520 00:07:55.574 Media bytes erased: 0 00:07:55.574 00:07:55.574 FDP events log page 00:07:55.574 =================== 00:07:55.574 Number of FDP events: 0 00:07:55.575 00:07:55.575 NVM Specific Namespace Data 00:07:55.575 =========================== 00:07:55.575 Logical Block Storage Tag Mask: 0 00:07:55.575 Protection Information Capabilities: 00:07:55.575 16b Guard Protection Information Storage Tag Support: No 00:07:55.575 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:55.575 Storage Tag Check Read Support: No 00:07:55.575 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.575 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.575 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.575 Extended LBA Format #03: Storage
Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.575 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.575 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.575 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.575 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.575 ===================================================== 00:07:55.575 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:55.575 ===================================================== 00:07:55.575 Controller Capabilities/Features 00:07:55.575 ================================ 00:07:55.575 Vendor ID: 1b36 00:07:55.575 Subsystem Vendor ID: 1af4 00:07:55.575 Serial Number: 12340 00:07:55.575 Model Number: QEMU NVMe Ctrl 00:07:55.575 Firmware Version: 8.0.0 00:07:55.575 Recommended Arb Burst: 6 00:07:55.575 IEEE OUI Identifier: 00 54 52 00:07:55.575 Multi-path I/O 00:07:55.575 May have multiple subsystem ports: No 00:07:55.575 May have multiple controllers: No 00:07:55.575 Associated with SR-IOV VF: No 00:07:55.575 Max Data Transfer Size: 524288 00:07:55.575 Max Number of Namespaces: 256 00:07:55.575 Max Number of I/O Queues: 64 00:07:55.575 NVMe Specification Version (VS): 1.4 00:07:55.575 NVMe Specification Version (Identify): 1.4 00:07:55.575 Maximum Queue Entries: 2048 00:07:55.575 Contiguous Queues Required: Yes 00:07:55.575 Arbitration Mechanisms Supported 00:07:55.575 Weighted Round Robin: Not Supported 00:07:55.575 Vendor Specific: Not Supported 00:07:55.575 Reset Timeout: 7500 ms 00:07:55.575 Doorbell Stride: 4 bytes 00:07:55.575 NVM Subsystem Reset: Not Supported 00:07:55.575 Command Sets Supported 00:07:55.575 NVM Command Set: Supported 00:07:55.575 Boot Partition: Not Supported 00:07:55.575 Memory Page Size Minimum: 4096 bytes 00:07:55.575 Memory Page Size Maximum: 65536 bytes 00:07:55.575 Persistent Memory Region: Not Supported 00:07:55.575 Optional Asynchronous Events Supported 00:07:55.575 Namespace Attribute Notices: Supported 00:07:55.575 Firmware Activation Notices: Not Supported 00:07:55.575 ANA Change Notices: Not Supported 00:07:55.575 PLE Aggregate Log Change Notices: Not Supported 00:07:55.575 LBA Status Info Alert Notices: Not Supported 00:07:55.575 EGE Aggregate Log Change Notices: Not Supported 00:07:55.575 Normal NVM Subsystem Shutdown event: Not Supported 00:07:55.575 Zone Descriptor Change Notices: Not Supported 00:07:55.575 Discovery Log Change Notices: Not Supported 00:07:55.575 Controller Attributes 00:07:55.575 128-bit Host Identifier: Not Supported 00:07:55.575 Non-Operational Permissive Mode: Not Supported 00:07:55.575 NVM Sets: Not Supported 00:07:55.575 Read Recovery Levels: Not Supported 00:07:55.575 Endurance Groups: Not Supported 00:07:55.575 Predictable Latency Mode: Not Supported 00:07:55.575 Traffic Based Keep ALive: Not Supported 00:07:55.575 Namespace Granularity: Not Supported 00:07:55.575 SQ Associations: Not Supported 00:07:55.575 UUID List: Not Supported 00:07:55.575 Multi-Domain Subsystem: Not Supported 00:07:55.575 Fixed Capacity Management: Not Supported 00:07:55.575 Variable Capacity Management: Not Supported 00:07:55.575 Delete Endurance Group: Not Supported 00:07:55.575 Delete NVM Set: Not Supported 00:07:55.575 Extended LBA Formats Supported: Supported 00:07:55.575 Flexible Data Placement Supported: Not Supported 00:07:55.575 00:07:55.575 Controller Memory Buffer 
Support 00:07:55.575 ================================ 00:07:55.575 Supported: No 00:07:55.575 00:07:55.575 Persistent Memory Region Support 00:07:55.575 ================================ 00:07:55.575 Supported: No 00:07:55.575 00:07:55.575 Admin Command Set Attributes 00:07:55.575 ============================ 00:07:55.575 Security Send/Receive: Not Supported 00:07:55.575 Format NVM: Supported 00:07:55.575 Firmware Activate/Download: Not Supported 00:07:55.575 Namespace Management: Supported 00:07:55.575 Device Self-Test: Not Supported 00:07:55.575 Directives: Supported 00:07:55.575 NVMe-MI: Not Supported 00:07:55.575 Virtualization Management: Not Supported 00:07:55.575 Doorbell Buffer Config: Supported 00:07:55.575 Get LBA Status Capability: Not Supported 00:07:55.575 Command & Feature Lockdown Capability: Not Supported 00:07:55.575 Abort Command Limit: 4 00:07:55.575 Async Event Request Limit: 4 00:07:55.575 Number of Firmware Slots: N/A 00:07:55.575 Firmware Slot 1 Read-Only: N/A 00:07:55.575 Firmware Activation Without Reset: N/A 00:07:55.575 Multiple Update Detection Support: N/A 00:07:55.575 Firmware Update Granularity: No Information Provided 00:07:55.575 Per-Namespace SMART Log: Yes 00:07:55.575 Asymmetric Namespace Access Log Page: Not Supported 00:07:55.575 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:55.575 Command Effects Log Page: Supported 00:07:55.575 Get Log Page Extended Data: Supported 00:07:55.575 Telemetry Log Pages: Not Supported 00:07:55.575 Persistent Event Log Pages: Not Supported 00:07:55.575 Supported Log Pages Log Page: May Support 00:07:55.576 Commands Supported & Effects Log Page: Not Supported 00:07:55.576 Feature Identifiers & Effects Log Page:May Support 00:07:55.576 NVMe-MI Commands & Effects Log Page: May Support 00:07:55.576 Data Area 4 for Telemetry Log: Not Supported 00:07:55.576 Error Log Page Entries Supported: 1 00:07:55.576 Keep Alive: Not Supported 00:07:55.576 00:07:55.576 NVM Command Set Attributes 00:07:55.576 ========================== 00:07:55.576 Submission Queue Entry Size 00:07:55.576 Max: 64 00:07:55.576 Min: 64 00:07:55.576 Completion Queue Entry Size 00:07:55.576 Max: 16 00:07:55.576 Min: 16 00:07:55.576 Number of Namespaces: 256 00:07:55.576 Compare Command: Supported 00:07:55.576 Write Uncorrectable Command: Not Supported 00:07:55.576 Dataset Management Command: Supported 00:07:55.576 Write Zeroes Command: Supported 00:07:55.576 Set Features Save Field: Supported 00:07:55.576 Reservations: Not Supported 00:07:55.576 Timestamp: Supported 00:07:55.576 Copy: Supported 00:07:55.576 Volatile Write Cache: Present 00:07:55.576 Atomic Write Unit (Normal): 1 00:07:55.576 Atomic Write Unit (PFail): 1 00:07:55.576 Atomic Compare & Write Unit: 1 00:07:55.576 Fused Compare & Write: Not Supported 00:07:55.576 Scatter-Gather List 00:07:55.576 SGL Command Set: Supported 00:07:55.576 SGL Keyed: Not Supported 00:07:55.576 SGL Bit Bucket Descriptor: Not Supported 00:07:55.576 SGL Metadata Pointer: Not Supported 00:07:55.576 Oversized SGL: Not Supported 00:07:55.576 SGL Metadata Address: Not Supported 00:07:55.576 SGL Offset: Not Supported 00:07:55.576 Transport SGL Data Block: Not Supported 00:07:55.576 Replay Protected Memory Block: Not Supported 00:07:55.576 00:07:55.576 Firmware Slot Information 00:07:55.576 ========================= 00:07:55.576 Active slot: 1 00:07:55.576 Slot 1 Firmware Revision: 1.0 00:07:55.576 00:07:55.576 00:07:55.576 Commands Supported and Effects 00:07:55.576 ============================== 00:07:55.576 Admin Commands 
00:07:55.576 -------------- 00:07:55.576 Delete I/O Submission Queue (00h): Supported 00:07:55.576 Create I/O Submission Queue (01h): Supported 00:07:55.576 Get Log Page (02h): Supported 00:07:55.576 Delete I/O Completion Queue (04h): Supported 00:07:55.576 Create I/O Completion Queue (05h): Supported 00:07:55.576 Identify (06h): Supported 00:07:55.576 Abort (08h): Supported 00:07:55.576 Set Features (09h): Supported 00:07:55.576 Get Features (0Ah): Supported 00:07:55.576 Asynchronous Event Request (0Ch): Supported 00:07:55.576 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:55.576 Directive Send (19h): Supported 00:07:55.576 Directive Receive (1Ah): Supported 00:07:55.576 Virtualization Management (1Ch): Supported 00:07:55.576 Doorbell Buffer Config (7Ch): Supported 00:07:55.576 Format NVM (80h): Supported LBA-Change 00:07:55.576 I/O Commands 00:07:55.576 ------------ 00:07:55.576 Flush (00h): Supported LBA-Change 00:07:55.576 Write (01h): Supported LBA-Change 00:07:55.576 Read (02h): Supported 00:07:55.576 Compare (05h): Supported 00:07:55.576 Write Zeroes (08h): Supported LBA-Change 00:07:55.576 Dataset Management (09h): Supported LBA-Change 00:07:55.576 Unknown (0Ch): Supported 00:07:55.576 Unknown (12h): Supported 00:07:55.576 Copy (19h): Supported LBA-Change 00:07:55.576 Unknown (1Dh): Supported LBA-Change 00:07:55.576 00:07:55.576 Error Log 00:07:55.576 ========= 00:07:55.576 00:07:55.576 Arbitration 00:07:55.576 =========== 00:07:55.576 Arbitration Burst: no limit 00:07:55.576 00:07:55.576 Power Management 00:07:55.576 ================ 00:07:55.576 Number of Power States: 1 00:07:55.576 Current Power State: Power State #0 00:07:55.576 Power State #0: 00:07:55.576 Max Power: 25.00 W 00:07:55.576 Non-Operational State: Operational 00:07:55.576 Entry Latency: 16 microseconds 00:07:55.576 Exit Latency: 4 microseconds 00:07:55.576 Relative Read Throughput: 0 00:07:55.576 Relative Read Latency: 0 00:07:55.576 Relative Write Throughput: 0 00:07:55.576 Relative Write Latency: 0 00:07:55.576 Idle Power: Not Reported 00:07:55.576 Active Power: Not Reported 00:07:55.576 Non-Operational Permissive Mode: Not Supported 00:07:55.576 00:07:55.576 Health Information 00:07:55.576 ================== 00:07:55.576 Critical Warnings: 00:07:55.576 Available Spare Space: OK 00:07:55.576 Temperature: OK 00:07:55.576 Device Reliability: OK 00:07:55.576 Read Only: No 00:07:55.576 Volatile Memory Backup: OK 00:07:55.576 Current Temperature: 323 Kelvin (50 Celsius) 00:07:55.576 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:55.576 Available Spare: 0% 00:07:55.576 Available Spare Threshold: 0% 00:07:55.576 Life Percentage Used: 0% 00:07:55.576 Data Units Read: 694 00:07:55.576 Data Units Written: 622 00:07:55.576 Host Read Commands: 37360 00:07:55.576 Host Write Commands: 37146 00:07:55.576 Controller Busy Time: 0 minutes 00:07:55.576 Power Cycles: 0 00:07:55.576 Power On Hours: 0 hours 00:07:55.576 Unsafe Shutdowns: 0 00:07:55.576 Unrecoverable Media Errors: 0 00:07:55.576 Lifetime Error Log Entries: 0 00:07:55.576 Warning Temperature Time: 0 minutes 00:07:55.576 Critical Temperature Time: 0 minutes 00:07:55.576 00:07:55.576 Number of Queues 00:07:55.576 ================ 00:07:55.576 Number of I/O Submission Queues: 64 00:07:55.577 Number of I/O Completion Queues: 64 00:07:55.577 00:07:55.577 ZNS Specific Controller Data 00:07:55.577 ============================ 00:07:55.577 Zone Append Size Limit: 0 00:07:55.577 00:07:55.577 00:07:55.577 Active Namespaces 00:07:55.577 ================= 
00:07:55.577 Namespace ID:1 00:07:55.577 Error Recovery Timeout: Unlimited 00:07:55.577 Command Set Identifier: NVM (00h) 00:07:55.577 Deallocate: Supported 00:07:55.577 Deallocated/Unwritten Error: Supported 00:07:55.577 Deallocated Read Value: All 0x00 00:07:55.577 Deallocate in Write Zeroes: Not Supported 00:07:55.577 Deallocated Guard Field: 0xFFFF 00:07:55.577 Flush: Supported 00:07:55.577 Reservation: Not Supported 00:07:55.577 Metadata Transferred as: Separate Metadata Buffer 00:07:55.577 Namespace Sharing Capabilities: Private 00:07:55.577 Size (in LBAs): 1548666 (5GiB) 00:07:55.577 Capacity (in LBAs): 1548666 (5GiB) 00:07:55.577 Utilization (in LBAs): 1548666 (5GiB) 00:07:55.577 Thin Provisioning: Not Supported 00:07:55.577 Per-NS Atomic Units: No 00:07:55.577 Maximum Single Source Range Length: 128 00:07:55.577 Maximum Copy Length: 128 00:07:55.577 Maximum Source Range Count: 128 00:07:55.577 NGUID/EUI64 Never Reused: No 00:07:55.577 Namespace Write Protected: No 00:07:55.577 Number of LBA Formats: 8 00:07:55.577 Current LBA Format: LBA Format #07 00:07:55.577 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:55.577 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:55.577 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:55.577 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:55.577 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:55.577 [2024-11-18 03:07:59.116410] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 75067 terminated unexpected 00:07:55.577 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:55.577 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:55.577 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:55.577 00:07:55.577 NVM Specific Namespace Data 00:07:55.577 =========================== 00:07:55.577 Logical Block Storage Tag Mask: 0 00:07:55.577 Protection Information Capabilities: 00:07:55.577 16b Guard Protection Information Storage Tag Support: No 00:07:55.577 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:55.577 Storage Tag Check Read Support: No 00:07:55.577 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.577 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.577 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.577 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.577 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.577 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.577 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.577 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.577 ===================================================== 00:07:55.577 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:55.577 ===================================================== 00:07:55.577 Controller Capabilities/Features 00:07:55.577 ================================ 00:07:55.577 Vendor ID: 1b36 00:07:55.577 Subsystem Vendor ID: 1af4 00:07:55.577 Serial Number: 12342 00:07:55.577 Model Number: QEMU NVMe Ctrl 00:07:55.577 Firmware Version: 8.0.0 00:07:55.577 Recommended Arb Burst: 6 00:07:55.577 IEEE OUI Identifier: 00 54 52 00:07:55.577 Multi-path I/O 00:07:55.577 May have
multiple subsystem ports: No 00:07:55.577 May have multiple controllers: No 00:07:55.577 Associated with SR-IOV VF: No 00:07:55.577 Max Data Transfer Size: 524288 00:07:55.577 Max Number of Namespaces: 256 00:07:55.577 Max Number of I/O Queues: 64 00:07:55.577 NVMe Specification Version (VS): 1.4 00:07:55.577 NVMe Specification Version (Identify): 1.4 00:07:55.577 Maximum Queue Entries: 2048 00:07:55.577 Contiguous Queues Required: Yes 00:07:55.577 Arbitration Mechanisms Supported 00:07:55.577 Weighted Round Robin: Not Supported 00:07:55.577 Vendor Specific: Not Supported 00:07:55.577 Reset Timeout: 7500 ms 00:07:55.577 Doorbell Stride: 4 bytes 00:07:55.577 NVM Subsystem Reset: Not Supported 00:07:55.577 Command Sets Supported 00:07:55.577 NVM Command Set: Supported 00:07:55.577 Boot Partition: Not Supported 00:07:55.578 Memory Page Size Minimum: 4096 bytes 00:07:55.578 Memory Page Size Maximum: 65536 bytes 00:07:55.578 Persistent Memory Region: Not Supported 00:07:55.578 Optional Asynchronous Events Supported 00:07:55.578 Namespace Attribute Notices: Supported 00:07:55.578 Firmware Activation Notices: Not Supported 00:07:55.578 ANA Change Notices: Not Supported 00:07:55.578 PLE Aggregate Log Change Notices: Not Supported 00:07:55.578 LBA Status Info Alert Notices: Not Supported 00:07:55.578 EGE Aggregate Log Change Notices: Not Supported 00:07:55.578 Normal NVM Subsystem Shutdown event: Not Supported 00:07:55.578 Zone Descriptor Change Notices: Not Supported 00:07:55.578 Discovery Log Change Notices: Not Supported 00:07:55.578 Controller Attributes 00:07:55.578 128-bit Host Identifier: Not Supported 00:07:55.578 Non-Operational Permissive Mode: Not Supported 00:07:55.578 NVM Sets: Not Supported 00:07:55.578 Read Recovery Levels: Not Supported 00:07:55.578 Endurance Groups: Not Supported 00:07:55.578 Predictable Latency Mode: Not Supported 00:07:55.578 Traffic Based Keep ALive: Not Supported 00:07:55.578 Namespace Granularity: Not Supported 00:07:55.578 SQ Associations: Not Supported 00:07:55.578 UUID List: Not Supported 00:07:55.578 Multi-Domain Subsystem: Not Supported 00:07:55.578 Fixed Capacity Management: Not Supported 00:07:55.578 Variable Capacity Management: Not Supported 00:07:55.578 Delete Endurance Group: Not Supported 00:07:55.578 Delete NVM Set: Not Supported 00:07:55.578 Extended LBA Formats Supported: Supported 00:07:55.578 Flexible Data Placement Supported: Not Supported 00:07:55.578 00:07:55.578 Controller Memory Buffer Support 00:07:55.578 ================================ 00:07:55.578 Supported: No 00:07:55.578 00:07:55.578 Persistent Memory Region Support 00:07:55.578 ================================ 00:07:55.578 Supported: No 00:07:55.578 00:07:55.578 Admin Command Set Attributes 00:07:55.578 ============================ 00:07:55.578 Security Send/Receive: Not Supported 00:07:55.578 Format NVM: Supported 00:07:55.578 Firmware Activate/Download: Not Supported 00:07:55.578 Namespace Management: Supported 00:07:55.578 Device Self-Test: Not Supported 00:07:55.578 Directives: Supported 00:07:55.578 NVMe-MI: Not Supported 00:07:55.578 Virtualization Management: Not Supported 00:07:55.578 Doorbell Buffer Config: Supported 00:07:55.578 Get LBA Status Capability: Not Supported 00:07:55.578 Command & Feature Lockdown Capability: Not Supported 00:07:55.578 Abort Command Limit: 4 00:07:55.578 Async Event Request Limit: 4 00:07:55.578 Number of Firmware Slots: N/A 00:07:55.578 Firmware Slot 1 Read-Only: N/A 00:07:55.578 Firmware Activation Without Reset: N/A 00:07:55.578 Multiple 
Update Detection Support: N/A 00:07:55.578 Firmware Update Granularity: No Information Provided 00:07:55.578 Per-Namespace SMART Log: Yes 00:07:55.578 Asymmetric Namespace Access Log Page: Not Supported 00:07:55.578 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:55.578 Command Effects Log Page: Supported 00:07:55.578 Get Log Page Extended Data: Supported 00:07:55.578 Telemetry Log Pages: Not Supported 00:07:55.578 Persistent Event Log Pages: Not Supported 00:07:55.578 Supported Log Pages Log Page: May Support 00:07:55.578 Commands Supported & Effects Log Page: Not Supported 00:07:55.578 Feature Identifiers & Effects Log Page:May Support 00:07:55.578 NVMe-MI Commands & Effects Log Page: May Support 00:07:55.578 Data Area 4 for Telemetry Log: Not Supported 00:07:55.578 Error Log Page Entries Supported: 1 00:07:55.578 Keep Alive: Not Supported 00:07:55.578 00:07:55.578 NVM Command Set Attributes 00:07:55.578 ========================== 00:07:55.578 Submission Queue Entry Size 00:07:55.578 Max: 64 00:07:55.578 Min: 64 00:07:55.578 Completion Queue Entry Size 00:07:55.578 Max: 16 00:07:55.578 Min: 16 00:07:55.578 Number of Namespaces: 256 00:07:55.578 Compare Command: Supported 00:07:55.578 Write Uncorrectable Command: Not Supported 00:07:55.578 Dataset Management Command: Supported 00:07:55.578 Write Zeroes Command: Supported 00:07:55.578 Set Features Save Field: Supported 00:07:55.578 Reservations: Not Supported 00:07:55.578 Timestamp: Supported 00:07:55.578 Copy: Supported 00:07:55.578 Volatile Write Cache: Present 00:07:55.578 Atomic Write Unit (Normal): 1 00:07:55.578 Atomic Write Unit (PFail): 1 00:07:55.578 Atomic Compare & Write Unit: 1 00:07:55.578 Fused Compare & Write: Not Supported 00:07:55.578 Scatter-Gather List 00:07:55.578 SGL Command Set: Supported 00:07:55.578 SGL Keyed: Not Supported 00:07:55.578 SGL Bit Bucket Descriptor: Not Supported 00:07:55.578 SGL Metadata Pointer: Not Supported 00:07:55.578 Oversized SGL: Not Supported 00:07:55.578 SGL Metadata Address: Not Supported 00:07:55.578 SGL Offset: Not Supported 00:07:55.578 Transport SGL Data Block: Not Supported 00:07:55.578 Replay Protected Memory Block: Not Supported 00:07:55.578 00:07:55.578 Firmware Slot Information 00:07:55.578 ========================= 00:07:55.578 Active slot: 1 00:07:55.578 Slot 1 Firmware Revision: 1.0 00:07:55.578 00:07:55.578 00:07:55.578 Commands Supported and Effects 00:07:55.578 ============================== 00:07:55.578 Admin Commands 00:07:55.578 -------------- 00:07:55.578 Delete I/O Submission Queue (00h): Supported 00:07:55.578 Create I/O Submission Queue (01h): Supported 00:07:55.578 Get Log Page (02h): Supported 00:07:55.578 Delete I/O Completion Queue (04h): Supported 00:07:55.578 Create I/O Completion Queue (05h): Supported 00:07:55.578 Identify (06h): Supported 00:07:55.578 Abort (08h): Supported 00:07:55.578 Set Features (09h): Supported 00:07:55.578 Get Features (0Ah): Supported 00:07:55.579 Asynchronous Event Request (0Ch): Supported 00:07:55.579 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:55.579 Directive Send (19h): Supported 00:07:55.579 Directive Receive (1Ah): Supported 00:07:55.579 Virtualization Management (1Ch): Supported 00:07:55.579 Doorbell Buffer Config (7Ch): Supported 00:07:55.579 Format NVM (80h): Supported LBA-Change 00:07:55.579 I/O Commands 00:07:55.579 ------------ 00:07:55.579 Flush (00h): Supported LBA-Change 00:07:55.579 Write (01h): Supported LBA-Change 00:07:55.579 Read (02h): Supported 00:07:55.579 Compare (05h): Supported 00:07:55.579 
Write Zeroes (08h): Supported LBA-Change 00:07:55.579 Dataset Management (09h): Supported LBA-Change 00:07:55.579 Unknown (0Ch): Supported 00:07:55.579 Unknown (12h): Supported 00:07:55.579 Copy (19h): Supported LBA-Change 00:07:55.579 Unknown (1Dh): Supported LBA-Change 00:07:55.579 00:07:55.579 Error Log 00:07:55.579 ========= 00:07:55.579 00:07:55.579 Arbitration 00:07:55.579 =========== 00:07:55.579 Arbitration Burst: no limit 00:07:55.579 00:07:55.579 Power Management 00:07:55.579 ================ 00:07:55.579 Number of Power States: 1 00:07:55.579 Current Power State: Power State #0 00:07:55.579 Power State #0: 00:07:55.579 Max Power: 25.00 W 00:07:55.579 Non-Operational State: Operational 00:07:55.579 Entry Latency: 16 microseconds 00:07:55.579 Exit Latency: 4 microseconds 00:07:55.579 Relative Read Throughput: 0 00:07:55.579 Relative Read Latency: 0 00:07:55.579 Relative Write Throughput: 0 00:07:55.579 Relative Write Latency: 0 00:07:55.579 Idle Power: Not Reported 00:07:55.579 Active Power: Not Reported 00:07:55.579 Non-Operational Permissive Mode: Not Supported 00:07:55.579 00:07:55.579 Health Information 00:07:55.579 ================== 00:07:55.579 Critical Warnings: 00:07:55.579 Available Spare Space: OK 00:07:55.579 Temperature: OK 00:07:55.579 Device Reliability: OK 00:07:55.579 Read Only: No 00:07:55.579 Volatile Memory Backup: OK 00:07:55.579 Current Temperature: 323 Kelvin (50 Celsius) 00:07:55.579 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:55.579 Available Spare: 0% 00:07:55.579 Available Spare Threshold: 0% 00:07:55.579 Life Percentage Used: 0% 00:07:55.579 Data Units Read: 2230 00:07:55.579 Data Units Written: 2017 00:07:55.579 Host Read Commands: 113299 00:07:55.579 Host Write Commands: 111568 00:07:55.579 Controller Busy Time: 0 minutes 00:07:55.579 Power Cycles: 0 00:07:55.579 Power On Hours: 0 hours 00:07:55.579 Unsafe Shutdowns: 0 00:07:55.579 Unrecoverable Media Errors: 0 00:07:55.579 Lifetime Error Log Entries: 0 00:07:55.579 Warning Temperature Time: 0 minutes 00:07:55.579 Critical Temperature Time: 0 minutes 00:07:55.579 00:07:55.579 Number of Queues 00:07:55.579 ================ 00:07:55.579 Number of I/O Submission Queues: 64 00:07:55.579 Number of I/O Completion Queues: 64 00:07:55.579 00:07:55.579 ZNS Specific Controller Data 00:07:55.579 ============================ 00:07:55.579 Zone Append Size Limit: 0 00:07:55.579 00:07:55.579 00:07:55.579 Active Namespaces 00:07:55.579 ================= 00:07:55.579 Namespace ID:1 00:07:55.579 Error Recovery Timeout: Unlimited 00:07:55.579 Command Set Identifier: NVM (00h) 00:07:55.579 Deallocate: Supported 00:07:55.579 Deallocated/Unwritten Error: Supported 00:07:55.579 Deallocated Read Value: All 0x00 00:07:55.579 Deallocate in Write Zeroes: Not Supported 00:07:55.579 Deallocated Guard Field: 0xFFFF 00:07:55.579 Flush: Supported 00:07:55.579 Reservation: Not Supported 00:07:55.579 Namespace Sharing Capabilities: Private 00:07:55.579 Size (in LBAs): 1048576 (4GiB) 00:07:55.579 Capacity (in LBAs): 1048576 (4GiB) 00:07:55.579 Utilization (in LBAs): 1048576 (4GiB) 00:07:55.579 Thin Provisioning: Not Supported 00:07:55.579 Per-NS Atomic Units: No 00:07:55.579 Maximum Single Source Range Length: 128 00:07:55.579 Maximum Copy Length: 128 00:07:55.579 Maximum Source Range Count: 128 00:07:55.579 NGUID/EUI64 Never Reused: No 00:07:55.579 Namespace Write Protected: No 00:07:55.579 Number of LBA Formats: 8 00:07:55.579 Current LBA Format: LBA Format #04 00:07:55.579 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:07:55.579 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:55.579 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:55.579 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:55.579 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:55.579 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:55.579 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:55.579 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:55.579 00:07:55.579 NVM Specific Namespace Data 00:07:55.579 =========================== 00:07:55.579 Logical Block Storage Tag Mask: 0 00:07:55.579 Protection Information Capabilities: 00:07:55.579 16b Guard Protection Information Storage Tag Support: No 00:07:55.579 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:55.579 Storage Tag Check Read Support: No 00:07:55.579 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.579 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.579 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.579 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.579 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.579 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.580 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.580 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.580 Namespace ID:2 00:07:55.580 Error Recovery Timeout: Unlimited 00:07:55.580 Command Set Identifier: NVM (00h) 00:07:55.580 Deallocate: Supported 00:07:55.580 Deallocated/Unwritten Error: Supported 00:07:55.580 Deallocated Read Value: All 0x00 00:07:55.580 Deallocate in Write Zeroes: Not Supported 00:07:55.580 Deallocated Guard Field: 0xFFFF 00:07:55.580 Flush: Supported 00:07:55.580 Reservation: Not Supported 00:07:55.580 Namespace Sharing Capabilities: Private 00:07:55.580 Size (in LBAs): 1048576 (4GiB) 00:07:55.580 Capacity (in LBAs): 1048576 (4GiB) 00:07:55.580 Utilization (in LBAs): 1048576 (4GiB) 00:07:55.580 Thin Provisioning: Not Supported 00:07:55.580 Per-NS Atomic Units: No 00:07:55.580 Maximum Single Source Range Length: 128 00:07:55.580 Maximum Copy Length: 128 00:07:55.580 Maximum Source Range Count: 128 00:07:55.580 NGUID/EUI64 Never Reused: No 00:07:55.580 Namespace Write Protected: No 00:07:55.580 Number of LBA Formats: 8 00:07:55.580 Current LBA Format: LBA Format #04 00:07:55.580 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:55.580 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:55.580 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:55.580 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:55.580 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:55.580 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:55.580 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:55.580 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:55.580 00:07:55.580 NVM Specific Namespace Data 00:07:55.580 =========================== 00:07:55.580 Logical Block Storage Tag Mask: 0 00:07:55.580 Protection Information Capabilities: 00:07:55.580 16b Guard Protection Information Storage Tag Support: No 00:07:55.580 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:55.580 Storage Tag 
Check Read Support: No 00:07:55.580 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.580 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.580 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.580 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.580 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.580 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.580 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.580 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.580 Namespace ID:3 00:07:55.580 Error Recovery Timeout: Unlimited 00:07:55.580 Command Set Identifier: NVM (00h) 00:07:55.580 Deallocate: Supported 00:07:55.580 Deallocated/Unwritten Error: Supported 00:07:55.580 Deallocated Read Value: All 0x00 00:07:55.580 Deallocate in Write Zeroes: Not Supported 00:07:55.580 Deallocated Guard Field: 0xFFFF 00:07:55.580 Flush: Supported 00:07:55.580 Reservation: Not Supported 00:07:55.580 Namespace Sharing Capabilities: Private 00:07:55.580 Size (in LBAs): 1048576 (4GiB) 00:07:55.841 Capacity (in LBAs): 1048576 (4GiB) 00:07:55.841 Utilization (in LBAs): 1048576 (4GiB) 00:07:55.841 Thin Provisioning: Not Supported 00:07:55.841 Per-NS Atomic Units: No 00:07:55.841 Maximum Single Source Range Length: 128 00:07:55.841 Maximum Copy Length: 128 00:07:55.841 Maximum Source Range Count: 128 00:07:55.841 NGUID/EUI64 Never Reused: No 00:07:55.841 Namespace Write Protected: No 00:07:55.841 Number of LBA Formats: 8 00:07:55.841 Current LBA Format: LBA Format #04 00:07:55.841 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:55.841 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:55.841 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:55.841 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:55.841 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:55.841 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:55.841 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:55.841 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:55.841 00:07:55.841 NVM Specific Namespace Data 00:07:55.841 =========================== 00:07:55.841 Logical Block Storage Tag Mask: 0 00:07:55.841 Protection Information Capabilities: 00:07:55.841 16b Guard Protection Information Storage Tag Support: No 00:07:55.841 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:55.841 Storage Tag Check Read Support: No 00:07:55.841 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.841 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.841 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.841 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.841 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.841 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.841 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.841 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:07:55.841 03:07:59 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:55.841 03:07:59 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:55.841 ===================================================== 00:07:55.841 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:55.841 ===================================================== 00:07:55.841 Controller Capabilities/Features 00:07:55.841 ================================ 00:07:55.841 Vendor ID: 1b36 00:07:55.841 Subsystem Vendor ID: 1af4 00:07:55.841 Serial Number: 12340 00:07:55.841 Model Number: QEMU NVMe Ctrl 00:07:55.841 Firmware Version: 8.0.0 00:07:55.841 Recommended Arb Burst: 6 00:07:55.841 IEEE OUI Identifier: 00 54 52 00:07:55.841 Multi-path I/O 00:07:55.841 May have multiple subsystem ports: No 00:07:55.841 May have multiple controllers: No 00:07:55.841 Associated with SR-IOV VF: No 00:07:55.841 Max Data Transfer Size: 524288 00:07:55.841 Max Number of Namespaces: 256 00:07:55.841 Max Number of I/O Queues: 64 00:07:55.841 NVMe Specification Version (VS): 1.4 00:07:55.841 NVMe Specification Version (Identify): 1.4 00:07:55.841 Maximum Queue Entries: 2048 00:07:55.841 Contiguous Queues Required: Yes 00:07:55.841 Arbitration Mechanisms Supported 00:07:55.841 Weighted Round Robin: Not Supported 00:07:55.841 Vendor Specific: Not Supported 00:07:55.841 Reset Timeout: 7500 ms 00:07:55.841 Doorbell Stride: 4 bytes 00:07:55.841 NVM Subsystem Reset: Not Supported 00:07:55.841 Command Sets Supported 00:07:55.841 NVM Command Set: Supported 00:07:55.841 Boot Partition: Not Supported 00:07:55.841 Memory Page Size Minimum: 4096 bytes 00:07:55.841 Memory Page Size Maximum: 65536 bytes 00:07:55.841 Persistent Memory Region: Not Supported 00:07:55.841 Optional Asynchronous Events Supported 00:07:55.841 Namespace Attribute Notices: Supported 00:07:55.841 Firmware Activation Notices: Not Supported 00:07:55.841 ANA Change Notices: Not Supported 00:07:55.841 PLE Aggregate Log Change Notices: Not Supported 00:07:55.841 LBA Status Info Alert Notices: Not Supported 00:07:55.841 EGE Aggregate Log Change Notices: Not Supported 00:07:55.841 Normal NVM Subsystem Shutdown event: Not Supported 00:07:55.841 Zone Descriptor Change Notices: Not Supported 00:07:55.841 Discovery Log Change Notices: Not Supported 00:07:55.841 Controller Attributes 00:07:55.841 128-bit Host Identifier: Not Supported 00:07:55.841 Non-Operational Permissive Mode: Not Supported 00:07:55.841 NVM Sets: Not Supported 00:07:55.841 Read Recovery Levels: Not Supported 00:07:55.841 Endurance Groups: Not Supported 00:07:55.841 Predictable Latency Mode: Not Supported 00:07:55.841 Traffic Based Keep ALive: Not Supported 00:07:55.841 Namespace Granularity: Not Supported 00:07:55.841 SQ Associations: Not Supported 00:07:55.841 UUID List: Not Supported 00:07:55.841 Multi-Domain Subsystem: Not Supported 00:07:55.841 Fixed Capacity Management: Not Supported 00:07:55.841 Variable Capacity Management: Not Supported 00:07:55.841 Delete Endurance Group: Not Supported 00:07:55.841 Delete NVM Set: Not Supported 00:07:55.841 Extended LBA Formats Supported: Supported 00:07:55.841 Flexible Data Placement Supported: Not Supported 00:07:55.841 00:07:55.841 Controller Memory Buffer Support 00:07:55.841 ================================ 00:07:55.841 Supported: No 00:07:55.841 00:07:55.841 Persistent Memory Region Support 00:07:55.841 
================================ 00:07:55.841 Supported: No 00:07:55.841 00:07:55.841 Admin Command Set Attributes 00:07:55.841 ============================ 00:07:55.841 Security Send/Receive: Not Supported 00:07:55.841 Format NVM: Supported 00:07:55.841 Firmware Activate/Download: Not Supported 00:07:55.841 Namespace Management: Supported 00:07:55.841 Device Self-Test: Not Supported 00:07:55.841 Directives: Supported 00:07:55.841 NVMe-MI: Not Supported 00:07:55.841 Virtualization Management: Not Supported 00:07:55.841 Doorbell Buffer Config: Supported 00:07:55.841 Get LBA Status Capability: Not Supported 00:07:55.841 Command & Feature Lockdown Capability: Not Supported 00:07:55.841 Abort Command Limit: 4 00:07:55.841 Async Event Request Limit: 4 00:07:55.841 Number of Firmware Slots: N/A 00:07:55.841 Firmware Slot 1 Read-Only: N/A 00:07:55.841 Firmware Activation Without Reset: N/A 00:07:55.841 Multiple Update Detection Support: N/A 00:07:55.841 Firmware Update Granularity: No Information Provided 00:07:55.841 Per-Namespace SMART Log: Yes 00:07:55.841 Asymmetric Namespace Access Log Page: Not Supported 00:07:55.841 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:55.841 Command Effects Log Page: Supported 00:07:55.841 Get Log Page Extended Data: Supported 00:07:55.841 Telemetry Log Pages: Not Supported 00:07:55.841 Persistent Event Log Pages: Not Supported 00:07:55.841 Supported Log Pages Log Page: May Support 00:07:55.841 Commands Supported & Effects Log Page: Not Supported 00:07:55.841 Feature Identifiers & Effects Log Page:May Support 00:07:55.841 NVMe-MI Commands & Effects Log Page: May Support 00:07:55.841 Data Area 4 for Telemetry Log: Not Supported 00:07:55.841 Error Log Page Entries Supported: 1 00:07:55.841 Keep Alive: Not Supported 00:07:55.841 00:07:55.841 NVM Command Set Attributes 00:07:55.841 ========================== 00:07:55.841 Submission Queue Entry Size 00:07:55.841 Max: 64 00:07:55.841 Min: 64 00:07:55.841 Completion Queue Entry Size 00:07:55.841 Max: 16 00:07:55.841 Min: 16 00:07:55.841 Number of Namespaces: 256 00:07:55.841 Compare Command: Supported 00:07:55.841 Write Uncorrectable Command: Not Supported 00:07:55.841 Dataset Management Command: Supported 00:07:55.841 Write Zeroes Command: Supported 00:07:55.841 Set Features Save Field: Supported 00:07:55.841 Reservations: Not Supported 00:07:55.841 Timestamp: Supported 00:07:55.841 Copy: Supported 00:07:55.841 Volatile Write Cache: Present 00:07:55.841 Atomic Write Unit (Normal): 1 00:07:55.841 Atomic Write Unit (PFail): 1 00:07:55.841 Atomic Compare & Write Unit: 1 00:07:55.841 Fused Compare & Write: Not Supported 00:07:55.841 Scatter-Gather List 00:07:55.841 SGL Command Set: Supported 00:07:55.841 SGL Keyed: Not Supported 00:07:55.841 SGL Bit Bucket Descriptor: Not Supported 00:07:55.841 SGL Metadata Pointer: Not Supported 00:07:55.841 Oversized SGL: Not Supported 00:07:55.841 SGL Metadata Address: Not Supported 00:07:55.841 SGL Offset: Not Supported 00:07:55.841 Transport SGL Data Block: Not Supported 00:07:55.841 Replay Protected Memory Block: Not Supported 00:07:55.841 00:07:55.841 Firmware Slot Information 00:07:55.841 ========================= 00:07:55.841 Active slot: 1 00:07:55.841 Slot 1 Firmware Revision: 1.0 00:07:55.841 00:07:55.841 00:07:55.841 Commands Supported and Effects 00:07:55.841 ============================== 00:07:55.841 Admin Commands 00:07:55.841 -------------- 00:07:55.841 Delete I/O Submission Queue (00h): Supported 00:07:55.841 Create I/O Submission Queue (01h): Supported 00:07:55.841 
Get Log Page (02h): Supported 00:07:55.841 Delete I/O Completion Queue (04h): Supported 00:07:55.841 Create I/O Completion Queue (05h): Supported 00:07:55.841 Identify (06h): Supported 00:07:55.841 Abort (08h): Supported 00:07:55.841 Set Features (09h): Supported 00:07:55.841 Get Features (0Ah): Supported 00:07:55.841 Asynchronous Event Request (0Ch): Supported 00:07:55.841 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:55.841 Directive Send (19h): Supported 00:07:55.841 Directive Receive (1Ah): Supported 00:07:55.841 Virtualization Management (1Ch): Supported 00:07:55.841 Doorbell Buffer Config (7Ch): Supported 00:07:55.841 Format NVM (80h): Supported LBA-Change 00:07:55.841 I/O Commands 00:07:55.841 ------------ 00:07:55.841 Flush (00h): Supported LBA-Change 00:07:55.841 Write (01h): Supported LBA-Change 00:07:55.841 Read (02h): Supported 00:07:55.842 Compare (05h): Supported 00:07:55.842 Write Zeroes (08h): Supported LBA-Change 00:07:55.842 Dataset Management (09h): Supported LBA-Change 00:07:55.842 Unknown (0Ch): Supported 00:07:55.842 Unknown (12h): Supported 00:07:55.842 Copy (19h): Supported LBA-Change 00:07:55.842 Unknown (1Dh): Supported LBA-Change 00:07:55.842 00:07:55.842 Error Log 00:07:55.842 ========= 00:07:55.842 00:07:55.842 Arbitration 00:07:55.842 =========== 00:07:55.842 Arbitration Burst: no limit 00:07:55.842 00:07:55.842 Power Management 00:07:55.842 ================ 00:07:55.842 Number of Power States: 1 00:07:55.842 Current Power State: Power State #0 00:07:55.842 Power State #0: 00:07:55.842 Max Power: 25.00 W 00:07:55.842 Non-Operational State: Operational 00:07:55.842 Entry Latency: 16 microseconds 00:07:55.842 Exit Latency: 4 microseconds 00:07:55.842 Relative Read Throughput: 0 00:07:55.842 Relative Read Latency: 0 00:07:55.842 Relative Write Throughput: 0 00:07:55.842 Relative Write Latency: 0 00:07:55.842 Idle Power: Not Reported 00:07:55.842 Active Power: Not Reported 00:07:55.842 Non-Operational Permissive Mode: Not Supported 00:07:55.842 00:07:55.842 Health Information 00:07:55.842 ================== 00:07:55.842 Critical Warnings: 00:07:55.842 Available Spare Space: OK 00:07:55.842 Temperature: OK 00:07:55.842 Device Reliability: OK 00:07:55.842 Read Only: No 00:07:55.842 Volatile Memory Backup: OK 00:07:55.842 Current Temperature: 323 Kelvin (50 Celsius) 00:07:55.842 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:55.842 Available Spare: 0% 00:07:55.842 Available Spare Threshold: 0% 00:07:55.842 Life Percentage Used: 0% 00:07:55.842 Data Units Read: 694 00:07:55.842 Data Units Written: 622 00:07:55.842 Host Read Commands: 37360 00:07:55.842 Host Write Commands: 37146 00:07:55.842 Controller Busy Time: 0 minutes 00:07:55.842 Power Cycles: 0 00:07:55.842 Power On Hours: 0 hours 00:07:55.842 Unsafe Shutdowns: 0 00:07:55.842 Unrecoverable Media Errors: 0 00:07:55.842 Lifetime Error Log Entries: 0 00:07:55.842 Warning Temperature Time: 0 minutes 00:07:55.842 Critical Temperature Time: 0 minutes 00:07:55.842 00:07:55.842 Number of Queues 00:07:55.842 ================ 00:07:55.842 Number of I/O Submission Queues: 64 00:07:55.842 Number of I/O Completion Queues: 64 00:07:55.842 00:07:55.842 ZNS Specific Controller Data 00:07:55.842 ============================ 00:07:55.842 Zone Append Size Limit: 0 00:07:55.842 00:07:55.842 00:07:55.842 Active Namespaces 00:07:55.842 ================= 00:07:55.842 Namespace ID:1 00:07:55.842 Error Recovery Timeout: Unlimited 00:07:55.842 Command Set Identifier: NVM (00h) 00:07:55.842 Deallocate: Supported 
00:07:55.842 Deallocated/Unwritten Error: Supported 00:07:55.842 Deallocated Read Value: All 0x00 00:07:55.842 Deallocate in Write Zeroes: Not Supported 00:07:55.842 Deallocated Guard Field: 0xFFFF 00:07:55.842 Flush: Supported 00:07:55.842 Reservation: Not Supported 00:07:55.842 Metadata Transferred as: Separate Metadata Buffer 00:07:55.842 Namespace Sharing Capabilities: Private 00:07:55.842 Size (in LBAs): 1548666 (5GiB) 00:07:55.842 Capacity (in LBAs): 1548666 (5GiB) 00:07:55.842 Utilization (in LBAs): 1548666 (5GiB) 00:07:55.842 Thin Provisioning: Not Supported 00:07:55.842 Per-NS Atomic Units: No 00:07:55.842 Maximum Single Source Range Length: 128 00:07:55.842 Maximum Copy Length: 128 00:07:55.842 Maximum Source Range Count: 128 00:07:55.842 NGUID/EUI64 Never Reused: No 00:07:55.842 Namespace Write Protected: No 00:07:55.842 Number of LBA Formats: 8 00:07:55.842 Current LBA Format: LBA Format #07 00:07:55.842 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:55.842 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:55.842 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:55.842 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:55.842 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:55.842 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:55.842 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:55.842 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:55.842 00:07:55.842 NVM Specific Namespace Data 00:07:55.842 =========================== 00:07:55.842 Logical Block Storage Tag Mask: 0 00:07:55.842 Protection Information Capabilities: 00:07:55.842 16b Guard Protection Information Storage Tag Support: No 00:07:55.842 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:55.842 Storage Tag Check Read Support: No 00:07:55.842 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.842 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.842 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.842 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.842 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.842 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.842 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.842 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.842 03:07:59 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:55.842 03:07:59 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:56.103 ===================================================== 00:07:56.103 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:56.103 ===================================================== 00:07:56.103 Controller Capabilities/Features 00:07:56.103 ================================ 00:07:56.103 Vendor ID: 1b36 00:07:56.103 Subsystem Vendor ID: 1af4 00:07:56.103 Serial Number: 12341 00:07:56.103 Model Number: QEMU NVMe Ctrl 00:07:56.103 Firmware Version: 8.0.0 00:07:56.103 Recommended Arb Burst: 6 00:07:56.103 IEEE OUI Identifier: 00 54 52 00:07:56.103 Multi-path I/O 00:07:56.103 May have multiple subsystem ports: No 00:07:56.103 May have multiple 
controllers: No 00:07:56.103 Associated with SR-IOV VF: No 00:07:56.103 Max Data Transfer Size: 524288 00:07:56.103 Max Number of Namespaces: 256 00:07:56.103 Max Number of I/O Queues: 64 00:07:56.103 NVMe Specification Version (VS): 1.4 00:07:56.103 NVMe Specification Version (Identify): 1.4 00:07:56.103 Maximum Queue Entries: 2048 00:07:56.103 Contiguous Queues Required: Yes 00:07:56.103 Arbitration Mechanisms Supported 00:07:56.103 Weighted Round Robin: Not Supported 00:07:56.103 Vendor Specific: Not Supported 00:07:56.103 Reset Timeout: 7500 ms 00:07:56.103 Doorbell Stride: 4 bytes 00:07:56.103 NVM Subsystem Reset: Not Supported 00:07:56.103 Command Sets Supported 00:07:56.103 NVM Command Set: Supported 00:07:56.103 Boot Partition: Not Supported 00:07:56.103 Memory Page Size Minimum: 4096 bytes 00:07:56.103 Memory Page Size Maximum: 65536 bytes 00:07:56.103 Persistent Memory Region: Not Supported 00:07:56.103 Optional Asynchronous Events Supported 00:07:56.103 Namespace Attribute Notices: Supported 00:07:56.104 Firmware Activation Notices: Not Supported 00:07:56.104 ANA Change Notices: Not Supported 00:07:56.104 PLE Aggregate Log Change Notices: Not Supported 00:07:56.104 LBA Status Info Alert Notices: Not Supported 00:07:56.104 EGE Aggregate Log Change Notices: Not Supported 00:07:56.104 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.104 Zone Descriptor Change Notices: Not Supported 00:07:56.104 Discovery Log Change Notices: Not Supported 00:07:56.104 Controller Attributes 00:07:56.104 128-bit Host Identifier: Not Supported 00:07:56.104 Non-Operational Permissive Mode: Not Supported 00:07:56.104 NVM Sets: Not Supported 00:07:56.104 Read Recovery Levels: Not Supported 00:07:56.104 Endurance Groups: Not Supported 00:07:56.104 Predictable Latency Mode: Not Supported 00:07:56.104 Traffic Based Keep ALive: Not Supported 00:07:56.104 Namespace Granularity: Not Supported 00:07:56.104 SQ Associations: Not Supported 00:07:56.104 UUID List: Not Supported 00:07:56.104 Multi-Domain Subsystem: Not Supported 00:07:56.104 Fixed Capacity Management: Not Supported 00:07:56.104 Variable Capacity Management: Not Supported 00:07:56.104 Delete Endurance Group: Not Supported 00:07:56.104 Delete NVM Set: Not Supported 00:07:56.104 Extended LBA Formats Supported: Supported 00:07:56.104 Flexible Data Placement Supported: Not Supported 00:07:56.104 00:07:56.104 Controller Memory Buffer Support 00:07:56.104 ================================ 00:07:56.104 Supported: No 00:07:56.104 00:07:56.104 Persistent Memory Region Support 00:07:56.104 ================================ 00:07:56.104 Supported: No 00:07:56.104 00:07:56.104 Admin Command Set Attributes 00:07:56.104 ============================ 00:07:56.104 Security Send/Receive: Not Supported 00:07:56.104 Format NVM: Supported 00:07:56.104 Firmware Activate/Download: Not Supported 00:07:56.104 Namespace Management: Supported 00:07:56.104 Device Self-Test: Not Supported 00:07:56.104 Directives: Supported 00:07:56.104 NVMe-MI: Not Supported 00:07:56.104 Virtualization Management: Not Supported 00:07:56.104 Doorbell Buffer Config: Supported 00:07:56.104 Get LBA Status Capability: Not Supported 00:07:56.104 Command & Feature Lockdown Capability: Not Supported 00:07:56.104 Abort Command Limit: 4 00:07:56.104 Async Event Request Limit: 4 00:07:56.104 Number of Firmware Slots: N/A 00:07:56.104 Firmware Slot 1 Read-Only: N/A 00:07:56.104 Firmware Activation Without Reset: N/A 00:07:56.104 Multiple Update Detection Support: N/A 00:07:56.104 Firmware Update 
Granularity: No Information Provided 00:07:56.104 Per-Namespace SMART Log: Yes 00:07:56.104 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.104 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:56.104 Command Effects Log Page: Supported 00:07:56.104 Get Log Page Extended Data: Supported 00:07:56.104 Telemetry Log Pages: Not Supported 00:07:56.104 Persistent Event Log Pages: Not Supported 00:07:56.104 Supported Log Pages Log Page: May Support 00:07:56.104 Commands Supported & Effects Log Page: Not Supported 00:07:56.104 Feature Identifiers & Effects Log Page:May Support 00:07:56.104 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.104 Data Area 4 for Telemetry Log: Not Supported 00:07:56.104 Error Log Page Entries Supported: 1 00:07:56.104 Keep Alive: Not Supported 00:07:56.104 00:07:56.104 NVM Command Set Attributes 00:07:56.104 ========================== 00:07:56.104 Submission Queue Entry Size 00:07:56.104 Max: 64 00:07:56.104 Min: 64 00:07:56.104 Completion Queue Entry Size 00:07:56.104 Max: 16 00:07:56.104 Min: 16 00:07:56.104 Number of Namespaces: 256 00:07:56.104 Compare Command: Supported 00:07:56.104 Write Uncorrectable Command: Not Supported 00:07:56.104 Dataset Management Command: Supported 00:07:56.104 Write Zeroes Command: Supported 00:07:56.104 Set Features Save Field: Supported 00:07:56.104 Reservations: Not Supported 00:07:56.104 Timestamp: Supported 00:07:56.104 Copy: Supported 00:07:56.104 Volatile Write Cache: Present 00:07:56.104 Atomic Write Unit (Normal): 1 00:07:56.104 Atomic Write Unit (PFail): 1 00:07:56.104 Atomic Compare & Write Unit: 1 00:07:56.104 Fused Compare & Write: Not Supported 00:07:56.104 Scatter-Gather List 00:07:56.104 SGL Command Set: Supported 00:07:56.104 SGL Keyed: Not Supported 00:07:56.104 SGL Bit Bucket Descriptor: Not Supported 00:07:56.104 SGL Metadata Pointer: Not Supported 00:07:56.104 Oversized SGL: Not Supported 00:07:56.104 SGL Metadata Address: Not Supported 00:07:56.104 SGL Offset: Not Supported 00:07:56.104 Transport SGL Data Block: Not Supported 00:07:56.104 Replay Protected Memory Block: Not Supported 00:07:56.104 00:07:56.104 Firmware Slot Information 00:07:56.104 ========================= 00:07:56.104 Active slot: 1 00:07:56.104 Slot 1 Firmware Revision: 1.0 00:07:56.104 00:07:56.104 00:07:56.104 Commands Supported and Effects 00:07:56.104 ============================== 00:07:56.104 Admin Commands 00:07:56.104 -------------- 00:07:56.104 Delete I/O Submission Queue (00h): Supported 00:07:56.104 Create I/O Submission Queue (01h): Supported 00:07:56.104 Get Log Page (02h): Supported 00:07:56.104 Delete I/O Completion Queue (04h): Supported 00:07:56.104 Create I/O Completion Queue (05h): Supported 00:07:56.104 Identify (06h): Supported 00:07:56.104 Abort (08h): Supported 00:07:56.104 Set Features (09h): Supported 00:07:56.104 Get Features (0Ah): Supported 00:07:56.104 Asynchronous Event Request (0Ch): Supported 00:07:56.104 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.104 Directive Send (19h): Supported 00:07:56.104 Directive Receive (1Ah): Supported 00:07:56.104 Virtualization Management (1Ch): Supported 00:07:56.104 Doorbell Buffer Config (7Ch): Supported 00:07:56.104 Format NVM (80h): Supported LBA-Change 00:07:56.104 I/O Commands 00:07:56.104 ------------ 00:07:56.104 Flush (00h): Supported LBA-Change 00:07:56.104 Write (01h): Supported LBA-Change 00:07:56.104 Read (02h): Supported 00:07:56.104 Compare (05h): Supported 00:07:56.104 Write Zeroes (08h): Supported LBA-Change 00:07:56.104 
Dataset Management (09h): Supported LBA-Change 00:07:56.104 Unknown (0Ch): Supported 00:07:56.104 Unknown (12h): Supported 00:07:56.104 Copy (19h): Supported LBA-Change 00:07:56.104 Unknown (1Dh): Supported LBA-Change 00:07:56.104 00:07:56.104 Error Log 00:07:56.104 ========= 00:07:56.104 00:07:56.104 Arbitration 00:07:56.104 =========== 00:07:56.104 Arbitration Burst: no limit 00:07:56.104 00:07:56.104 Power Management 00:07:56.104 ================ 00:07:56.104 Number of Power States: 1 00:07:56.104 Current Power State: Power State #0 00:07:56.104 Power State #0: 00:07:56.104 Max Power: 25.00 W 00:07:56.104 Non-Operational State: Operational 00:07:56.104 Entry Latency: 16 microseconds 00:07:56.104 Exit Latency: 4 microseconds 00:07:56.104 Relative Read Throughput: 0 00:07:56.104 Relative Read Latency: 0 00:07:56.104 Relative Write Throughput: 0 00:07:56.104 Relative Write Latency: 0 00:07:56.105 Idle Power: Not Reported 00:07:56.105 Active Power: Not Reported 00:07:56.105 Non-Operational Permissive Mode: Not Supported 00:07:56.105 00:07:56.105 Health Information 00:07:56.105 ================== 00:07:56.105 Critical Warnings: 00:07:56.105 Available Spare Space: OK 00:07:56.105 Temperature: OK 00:07:56.105 Device Reliability: OK 00:07:56.105 Read Only: No 00:07:56.105 Volatile Memory Backup: OK 00:07:56.105 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.105 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.105 Available Spare: 0% 00:07:56.105 Available Spare Threshold: 0% 00:07:56.105 Life Percentage Used: 0% 00:07:56.105 Data Units Read: 1054 00:07:56.105 Data Units Written: 921 00:07:56.105 Host Read Commands: 53796 00:07:56.105 Host Write Commands: 52566 00:07:56.105 Controller Busy Time: 0 minutes 00:07:56.105 Power Cycles: 0 00:07:56.105 Power On Hours: 0 hours 00:07:56.105 Unsafe Shutdowns: 0 00:07:56.105 Unrecoverable Media Errors: 0 00:07:56.105 Lifetime Error Log Entries: 0 00:07:56.105 Warning Temperature Time: 0 minutes 00:07:56.105 Critical Temperature Time: 0 minutes 00:07:56.105 00:07:56.105 Number of Queues 00:07:56.105 ================ 00:07:56.105 Number of I/O Submission Queues: 64 00:07:56.105 Number of I/O Completion Queues: 64 00:07:56.105 00:07:56.105 ZNS Specific Controller Data 00:07:56.105 ============================ 00:07:56.105 Zone Append Size Limit: 0 00:07:56.105 00:07:56.105 00:07:56.105 Active Namespaces 00:07:56.105 ================= 00:07:56.105 Namespace ID:1 00:07:56.105 Error Recovery Timeout: Unlimited 00:07:56.105 Command Set Identifier: NVM (00h) 00:07:56.105 Deallocate: Supported 00:07:56.105 Deallocated/Unwritten Error: Supported 00:07:56.105 Deallocated Read Value: All 0x00 00:07:56.105 Deallocate in Write Zeroes: Not Supported 00:07:56.105 Deallocated Guard Field: 0xFFFF 00:07:56.105 Flush: Supported 00:07:56.105 Reservation: Not Supported 00:07:56.105 Namespace Sharing Capabilities: Private 00:07:56.105 Size (in LBAs): 1310720 (5GiB) 00:07:56.105 Capacity (in LBAs): 1310720 (5GiB) 00:07:56.105 Utilization (in LBAs): 1310720 (5GiB) 00:07:56.105 Thin Provisioning: Not Supported 00:07:56.105 Per-NS Atomic Units: No 00:07:56.105 Maximum Single Source Range Length: 128 00:07:56.105 Maximum Copy Length: 128 00:07:56.105 Maximum Source Range Count: 128 00:07:56.105 NGUID/EUI64 Never Reused: No 00:07:56.105 Namespace Write Protected: No 00:07:56.105 Number of LBA Formats: 8 00:07:56.105 Current LBA Format: LBA Format #04 00:07:56.105 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.105 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:07:56.105 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.105 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.105 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.105 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.105 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.105 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.105 00:07:56.105 NVM Specific Namespace Data 00:07:56.105 =========================== 00:07:56.105 Logical Block Storage Tag Mask: 0 00:07:56.105 Protection Information Capabilities: 00:07:56.105 16b Guard Protection Information Storage Tag Support: No 00:07:56.105 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.105 Storage Tag Check Read Support: No 00:07:56.105 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.105 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.105 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.105 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.105 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.105 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.105 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.105 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.105 03:07:59 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:56.105 03:07:59 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:56.367 ===================================================== 00:07:56.367 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:56.367 ===================================================== 00:07:56.367 Controller Capabilities/Features 00:07:56.367 ================================ 00:07:56.367 Vendor ID: 1b36 00:07:56.367 Subsystem Vendor ID: 1af4 00:07:56.367 Serial Number: 12342 00:07:56.367 Model Number: QEMU NVMe Ctrl 00:07:56.367 Firmware Version: 8.0.0 00:07:56.367 Recommended Arb Burst: 6 00:07:56.367 IEEE OUI Identifier: 00 54 52 00:07:56.367 Multi-path I/O 00:07:56.367 May have multiple subsystem ports: No 00:07:56.367 May have multiple controllers: No 00:07:56.367 Associated with SR-IOV VF: No 00:07:56.367 Max Data Transfer Size: 524288 00:07:56.367 Max Number of Namespaces: 256 00:07:56.367 Max Number of I/O Queues: 64 00:07:56.367 NVMe Specification Version (VS): 1.4 00:07:56.367 NVMe Specification Version (Identify): 1.4 00:07:56.367 Maximum Queue Entries: 2048 00:07:56.367 Contiguous Queues Required: Yes 00:07:56.367 Arbitration Mechanisms Supported 00:07:56.367 Weighted Round Robin: Not Supported 00:07:56.367 Vendor Specific: Not Supported 00:07:56.367 Reset Timeout: 7500 ms 00:07:56.367 Doorbell Stride: 4 bytes 00:07:56.367 NVM Subsystem Reset: Not Supported 00:07:56.367 Command Sets Supported 00:07:56.367 NVM Command Set: Supported 00:07:56.367 Boot Partition: Not Supported 00:07:56.367 Memory Page Size Minimum: 4096 bytes 00:07:56.367 Memory Page Size Maximum: 65536 bytes 00:07:56.367 Persistent Memory Region: Not Supported 00:07:56.367 Optional Asynchronous Events Supported 00:07:56.367 Namespace Attribute Notices: Supported 00:07:56.367 Firmware 
Activation Notices: Not Supported 00:07:56.367 ANA Change Notices: Not Supported 00:07:56.367 PLE Aggregate Log Change Notices: Not Supported 00:07:56.367 LBA Status Info Alert Notices: Not Supported 00:07:56.367 EGE Aggregate Log Change Notices: Not Supported 00:07:56.367 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.367 Zone Descriptor Change Notices: Not Supported 00:07:56.367 Discovery Log Change Notices: Not Supported 00:07:56.367 Controller Attributes 00:07:56.367 128-bit Host Identifier: Not Supported 00:07:56.367 Non-Operational Permissive Mode: Not Supported 00:07:56.367 NVM Sets: Not Supported 00:07:56.367 Read Recovery Levels: Not Supported 00:07:56.367 Endurance Groups: Not Supported 00:07:56.367 Predictable Latency Mode: Not Supported 00:07:56.367 Traffic Based Keep Alive: Not Supported 00:07:56.367 Namespace Granularity: Not Supported 00:07:56.367 SQ Associations: Not Supported 00:07:56.367 UUID List: Not Supported 00:07:56.367 Multi-Domain Subsystem: Not Supported 00:07:56.367 Fixed Capacity Management: Not Supported 00:07:56.367 Variable Capacity Management: Not Supported 00:07:56.367 Delete Endurance Group: Not Supported 00:07:56.367 Delete NVM Set: Not Supported 00:07:56.367 Extended LBA Formats Supported: Supported 00:07:56.367 Flexible Data Placement Supported: Not Supported 00:07:56.367 00:07:56.367 Controller Memory Buffer Support 00:07:56.367 ================================ 00:07:56.367 Supported: No 00:07:56.367 00:07:56.367 Persistent Memory Region Support 00:07:56.367 ================================ 00:07:56.367 Supported: No 00:07:56.367 00:07:56.367 Admin Command Set Attributes 00:07:56.367 ============================ 00:07:56.367 Security Send/Receive: Not Supported 00:07:56.367 Format NVM: Supported 00:07:56.367 Firmware Activate/Download: Not Supported 00:07:56.367 Namespace Management: Supported 00:07:56.367 Device Self-Test: Not Supported 00:07:56.367 Directives: Supported 00:07:56.367 NVMe-MI: Not Supported 00:07:56.367 Virtualization Management: Not Supported 00:07:56.367 Doorbell Buffer Config: Supported 00:07:56.367 Get LBA Status Capability: Not Supported 00:07:56.367 Command & Feature Lockdown Capability: Not Supported 00:07:56.367 Abort Command Limit: 4 00:07:56.367 Async Event Request Limit: 4 00:07:56.367 Number of Firmware Slots: N/A 00:07:56.367 Firmware Slot 1 Read-Only: N/A 00:07:56.367 Firmware Activation Without Reset: N/A 00:07:56.367 Multiple Update Detection Support: N/A 00:07:56.367 Firmware Update Granularity: No Information Provided 00:07:56.367 Per-Namespace SMART Log: Yes 00:07:56.367 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.367 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:56.367 Command Effects Log Page: Supported 00:07:56.367 Get Log Page Extended Data: Supported 00:07:56.367 Telemetry Log Pages: Not Supported 00:07:56.367 Persistent Event Log Pages: Not Supported 00:07:56.367 Supported Log Pages Log Page: May Support 00:07:56.367 Commands Supported & Effects Log Page: Not Supported 00:07:56.367 Feature Identifiers & Effects Log Page: May Support 00:07:56.367 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.367 Data Area 4 for Telemetry Log: Not Supported 00:07:56.367 Error Log Page Entries Supported: 1 00:07:56.367 Keep Alive: Not Supported 00:07:56.367 00:07:56.367 NVM Command Set Attributes 00:07:56.367 ========================== 00:07:56.367 Submission Queue Entry Size 00:07:56.367 Max: 64 00:07:56.367 Min: 64 00:07:56.367 Completion Queue Entry Size 00:07:56.367 Max: 16
00:07:56.367 Min: 16 00:07:56.367 Number of Namespaces: 256 00:07:56.367 Compare Command: Supported 00:07:56.367 Write Uncorrectable Command: Not Supported 00:07:56.367 Dataset Management Command: Supported 00:07:56.367 Write Zeroes Command: Supported 00:07:56.367 Set Features Save Field: Supported 00:07:56.367 Reservations: Not Supported 00:07:56.367 Timestamp: Supported 00:07:56.367 Copy: Supported 00:07:56.367 Volatile Write Cache: Present 00:07:56.367 Atomic Write Unit (Normal): 1 00:07:56.367 Atomic Write Unit (PFail): 1 00:07:56.367 Atomic Compare & Write Unit: 1 00:07:56.367 Fused Compare & Write: Not Supported 00:07:56.367 Scatter-Gather List 00:07:56.367 SGL Command Set: Supported 00:07:56.367 SGL Keyed: Not Supported 00:07:56.367 SGL Bit Bucket Descriptor: Not Supported 00:07:56.367 SGL Metadata Pointer: Not Supported 00:07:56.367 Oversized SGL: Not Supported 00:07:56.367 SGL Metadata Address: Not Supported 00:07:56.367 SGL Offset: Not Supported 00:07:56.367 Transport SGL Data Block: Not Supported 00:07:56.367 Replay Protected Memory Block: Not Supported 00:07:56.367 00:07:56.367 Firmware Slot Information 00:07:56.367 ========================= 00:07:56.367 Active slot: 1 00:07:56.367 Slot 1 Firmware Revision: 1.0 00:07:56.367 00:07:56.367 00:07:56.367 Commands Supported and Effects 00:07:56.367 ============================== 00:07:56.367 Admin Commands 00:07:56.367 -------------- 00:07:56.367 Delete I/O Submission Queue (00h): Supported 00:07:56.367 Create I/O Submission Queue (01h): Supported 00:07:56.367 Get Log Page (02h): Supported 00:07:56.367 Delete I/O Completion Queue (04h): Supported 00:07:56.367 Create I/O Completion Queue (05h): Supported 00:07:56.367 Identify (06h): Supported 00:07:56.367 Abort (08h): Supported 00:07:56.367 Set Features (09h): Supported 00:07:56.367 Get Features (0Ah): Supported 00:07:56.367 Asynchronous Event Request (0Ch): Supported 00:07:56.367 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.367 Directive Send (19h): Supported 00:07:56.367 Directive Receive (1Ah): Supported 00:07:56.367 Virtualization Management (1Ch): Supported 00:07:56.367 Doorbell Buffer Config (7Ch): Supported 00:07:56.367 Format NVM (80h): Supported LBA-Change 00:07:56.367 I/O Commands 00:07:56.367 ------------ 00:07:56.367 Flush (00h): Supported LBA-Change 00:07:56.367 Write (01h): Supported LBA-Change 00:07:56.367 Read (02h): Supported 00:07:56.367 Compare (05h): Supported 00:07:56.367 Write Zeroes (08h): Supported LBA-Change 00:07:56.367 Dataset Management (09h): Supported LBA-Change 00:07:56.367 Unknown (0Ch): Supported 00:07:56.367 Unknown (12h): Supported 00:07:56.367 Copy (19h): Supported LBA-Change 00:07:56.367 Unknown (1Dh): Supported LBA-Change 00:07:56.367 00:07:56.367 Error Log 00:07:56.367 ========= 00:07:56.367 00:07:56.367 Arbitration 00:07:56.367 =========== 00:07:56.367 Arbitration Burst: no limit 00:07:56.367 00:07:56.367 Power Management 00:07:56.368 ================ 00:07:56.368 Number of Power States: 1 00:07:56.368 Current Power State: Power State #0 00:07:56.368 Power State #0: 00:07:56.368 Max Power: 25.00 W 00:07:56.368 Non-Operational State: Operational 00:07:56.368 Entry Latency: 16 microseconds 00:07:56.368 Exit Latency: 4 microseconds 00:07:56.368 Relative Read Throughput: 0 00:07:56.368 Relative Read Latency: 0 00:07:56.368 Relative Write Throughput: 0 00:07:56.368 Relative Write Latency: 0 00:07:56.368 Idle Power: Not Reported 00:07:56.368 Active Power: Not Reported 00:07:56.368 Non-Operational Permissive Mode: Not Supported 
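The identify dump above comes from the per-device loop visible in the log: the harness iterates over the detected PCIe addresses and runs the identify binary once per controller. A minimal way to reproduce a single query by hand, assuming the repo path and PCIe address printed in this run (both are specific to this runner and would differ elsewhere):

    # Query one controller directly, as nvme.sh does for each bdf in "${bdfs[@]}".
    IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    "$IDENTIFY" -r 'trtype:PCIe traddr:0000:00:12.0' -i 0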
00:07:56.368 00:07:56.368 Health Information 00:07:56.368 ================== 00:07:56.368 Critical Warnings: 00:07:56.368 Available Spare Space: OK 00:07:56.368 Temperature: OK 00:07:56.368 Device Reliability: OK 00:07:56.368 Read Only: No 00:07:56.368 Volatile Memory Backup: OK 00:07:56.368 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.368 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.368 Available Spare: 0% 00:07:56.368 Available Spare Threshold: 0% 00:07:56.368 Life Percentage Used: 0% 00:07:56.368 Data Units Read: 2230 00:07:56.368 Data Units Written: 2017 00:07:56.368 Host Read Commands: 113299 00:07:56.368 Host Write Commands: 111568 00:07:56.368 Controller Busy Time: 0 minutes 00:07:56.368 Power Cycles: 0 00:07:56.368 Power On Hours: 0 hours 00:07:56.368 Unsafe Shutdowns: 0 00:07:56.368 Unrecoverable Media Errors: 0 00:07:56.368 Lifetime Error Log Entries: 0 00:07:56.368 Warning Temperature Time: 0 minutes 00:07:56.368 Critical Temperature Time: 0 minutes 00:07:56.368 00:07:56.368 Number of Queues 00:07:56.368 ================ 00:07:56.368 Number of I/O Submission Queues: 64 00:07:56.368 Number of I/O Completion Queues: 64 00:07:56.368 00:07:56.368 ZNS Specific Controller Data 00:07:56.368 ============================ 00:07:56.368 Zone Append Size Limit: 0 00:07:56.368 00:07:56.368 00:07:56.368 Active Namespaces 00:07:56.368 ================= 00:07:56.368 Namespace ID:1 00:07:56.368 Error Recovery Timeout: Unlimited 00:07:56.368 Command Set Identifier: NVM (00h) 00:07:56.368 Deallocate: Supported 00:07:56.368 Deallocated/Unwritten Error: Supported 00:07:56.368 Deallocated Read Value: All 0x00 00:07:56.368 Deallocate in Write Zeroes: Not Supported 00:07:56.368 Deallocated Guard Field: 0xFFFF 00:07:56.368 Flush: Supported 00:07:56.368 Reservation: Not Supported 00:07:56.368 Namespace Sharing Capabilities: Private 00:07:56.368 Size (in LBAs): 1048576 (4GiB) 00:07:56.368 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.368 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.368 Thin Provisioning: Not Supported 00:07:56.368 Per-NS Atomic Units: No 00:07:56.368 Maximum Single Source Range Length: 128 00:07:56.368 Maximum Copy Length: 128 00:07:56.368 Maximum Source Range Count: 128 00:07:56.368 NGUID/EUI64 Never Reused: No 00:07:56.368 Namespace Write Protected: No 00:07:56.368 Number of LBA Formats: 8 00:07:56.368 Current LBA Format: LBA Format #04 00:07:56.368 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.368 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.368 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.368 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.368 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.368 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.368 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.368 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.368 00:07:56.368 NVM Specific Namespace Data 00:07:56.368 =========================== 00:07:56.368 Logical Block Storage Tag Mask: 0 00:07:56.368 Protection Information Capabilities: 00:07:56.368 16b Guard Protection Information Storage Tag Support: No 00:07:56.368 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.368 Storage Tag Check Read Support: No 00:07:56.368 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Namespace ID:2 00:07:56.368 Error Recovery Timeout: Unlimited 00:07:56.368 Command Set Identifier: NVM (00h) 00:07:56.368 Deallocate: Supported 00:07:56.368 Deallocated/Unwritten Error: Supported 00:07:56.368 Deallocated Read Value: All 0x00 00:07:56.368 Deallocate in Write Zeroes: Not Supported 00:07:56.368 Deallocated Guard Field: 0xFFFF 00:07:56.368 Flush: Supported 00:07:56.368 Reservation: Not Supported 00:07:56.368 Namespace Sharing Capabilities: Private 00:07:56.368 Size (in LBAs): 1048576 (4GiB) 00:07:56.368 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.368 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.368 Thin Provisioning: Not Supported 00:07:56.368 Per-NS Atomic Units: No 00:07:56.368 Maximum Single Source Range Length: 128 00:07:56.368 Maximum Copy Length: 128 00:07:56.368 Maximum Source Range Count: 128 00:07:56.368 NGUID/EUI64 Never Reused: No 00:07:56.368 Namespace Write Protected: No 00:07:56.368 Number of LBA Formats: 8 00:07:56.368 Current LBA Format: LBA Format #04 00:07:56.368 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.368 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.368 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.368 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.368 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.368 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.368 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.368 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.368 00:07:56.368 NVM Specific Namespace Data 00:07:56.368 =========================== 00:07:56.368 Logical Block Storage Tag Mask: 0 00:07:56.368 Protection Information Capabilities: 00:07:56.368 16b Guard Protection Information Storage Tag Support: No 00:07:56.368 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.368 Storage Tag Check Read Support: No 00:07:56.368 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.368 Namespace ID:3 00:07:56.368 Error Recovery Timeout: Unlimited 00:07:56.368 Command Set Identifier: NVM (00h) 00:07:56.368 Deallocate: Supported 00:07:56.368 Deallocated/Unwritten Error: Supported 00:07:56.368 Deallocated Read 
Value: All 0x00 00:07:56.368 Deallocate in Write Zeroes: Not Supported 00:07:56.368 Deallocated Guard Field: 0xFFFF 00:07:56.368 Flush: Supported 00:07:56.368 Reservation: Not Supported 00:07:56.368 Namespace Sharing Capabilities: Private 00:07:56.368 Size (in LBAs): 1048576 (4GiB) 00:07:56.368 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.368 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.368 Thin Provisioning: Not Supported 00:07:56.368 Per-NS Atomic Units: No 00:07:56.368 Maximum Single Source Range Length: 128 00:07:56.368 Maximum Copy Length: 128 00:07:56.368 Maximum Source Range Count: 128 00:07:56.368 NGUID/EUI64 Never Reused: No 00:07:56.368 Namespace Write Protected: No 00:07:56.368 Number of LBA Formats: 8 00:07:56.368 Current LBA Format: LBA Format #04 00:07:56.368 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.368 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.368 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.368 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.368 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.368 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.368 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.368 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.368 00:07:56.368 NVM Specific Namespace Data 00:07:56.368 =========================== 00:07:56.368 Logical Block Storage Tag Mask: 0 00:07:56.368 Protection Information Capabilities: 00:07:56.368 16b Guard Protection Information Storage Tag Support: No 00:07:56.368 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.369 Storage Tag Check Read Support: No 00:07:56.369 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.369 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.369 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.369 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.369 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.369 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.369 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.369 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.369 03:07:59 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:56.369 03:07:59 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:56.630 ===================================================== 00:07:56.630 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:56.630 ===================================================== 00:07:56.630 Controller Capabilities/Features 00:07:56.630 ================================ 00:07:56.630 Vendor ID: 1b36 00:07:56.630 Subsystem Vendor ID: 1af4 00:07:56.630 Serial Number: 12343 00:07:56.630 Model Number: QEMU NVMe Ctrl 00:07:56.630 Firmware Version: 8.0.0 00:07:56.630 Recommended Arb Burst: 6 00:07:56.630 IEEE OUI Identifier: 00 54 52 00:07:56.630 Multi-path I/O 00:07:56.630 May have multiple subsystem ports: No 00:07:56.630 May have multiple controllers: Yes 00:07:56.630 Associated with SR-IOV VF: No 00:07:56.630 Max Data Transfer Size: 524288 00:07:56.630 Max Number of Namespaces: 
256 00:07:56.630 Max Number of I/O Queues: 64 00:07:56.630 NVMe Specification Version (VS): 1.4 00:07:56.630 NVMe Specification Version (Identify): 1.4 00:07:56.630 Maximum Queue Entries: 2048 00:07:56.630 Contiguous Queues Required: Yes 00:07:56.630 Arbitration Mechanisms Supported 00:07:56.630 Weighted Round Robin: Not Supported 00:07:56.630 Vendor Specific: Not Supported 00:07:56.630 Reset Timeout: 7500 ms 00:07:56.630 Doorbell Stride: 4 bytes 00:07:56.630 NVM Subsystem Reset: Not Supported 00:07:56.630 Command Sets Supported 00:07:56.630 NVM Command Set: Supported 00:07:56.630 Boot Partition: Not Supported 00:07:56.630 Memory Page Size Minimum: 4096 bytes 00:07:56.630 Memory Page Size Maximum: 65536 bytes 00:07:56.630 Persistent Memory Region: Not Supported 00:07:56.630 Optional Asynchronous Events Supported 00:07:56.630 Namespace Attribute Notices: Supported 00:07:56.630 Firmware Activation Notices: Not Supported 00:07:56.630 ANA Change Notices: Not Supported 00:07:56.630 PLE Aggregate Log Change Notices: Not Supported 00:07:56.630 LBA Status Info Alert Notices: Not Supported 00:07:56.630 EGE Aggregate Log Change Notices: Not Supported 00:07:56.630 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.630 Zone Descriptor Change Notices: Not Supported 00:07:56.630 Discovery Log Change Notices: Not Supported 00:07:56.630 Controller Attributes 00:07:56.630 128-bit Host Identifier: Not Supported 00:07:56.630 Non-Operational Permissive Mode: Not Supported 00:07:56.630 NVM Sets: Not Supported 00:07:56.630 Read Recovery Levels: Not Supported 00:07:56.630 Endurance Groups: Supported 00:07:56.630 Predictable Latency Mode: Not Supported 00:07:56.630 Traffic Based Keep Alive: Not Supported 00:07:56.630 Namespace Granularity: Not Supported 00:07:56.630 SQ Associations: Not Supported 00:07:56.630 UUID List: Not Supported 00:07:56.630 Multi-Domain Subsystem: Not Supported 00:07:56.630 Fixed Capacity Management: Not Supported 00:07:56.630 Variable Capacity Management: Not Supported 00:07:56.630 Delete Endurance Group: Not Supported 00:07:56.630 Delete NVM Set: Not Supported 00:07:56.630 Extended LBA Formats Supported: Supported 00:07:56.630 Flexible Data Placement Supported: Supported 00:07:56.630 00:07:56.630 Controller Memory Buffer Support 00:07:56.630 ================================ 00:07:56.630 Supported: No 00:07:56.630 00:07:56.630 Persistent Memory Region Support 00:07:56.630 ================================ 00:07:56.630 Supported: No 00:07:56.630 00:07:56.630 Admin Command Set Attributes 00:07:56.630 ============================ 00:07:56.630 Security Send/Receive: Not Supported 00:07:56.630 Format NVM: Supported 00:07:56.630 Firmware Activate/Download: Not Supported 00:07:56.630 Namespace Management: Supported 00:07:56.630 Device Self-Test: Not Supported 00:07:56.630 Directives: Supported 00:07:56.630 NVMe-MI: Not Supported 00:07:56.630 Virtualization Management: Not Supported 00:07:56.630 Doorbell Buffer Config: Supported 00:07:56.630 Get LBA Status Capability: Not Supported 00:07:56.630 Command & Feature Lockdown Capability: Not Supported 00:07:56.630 Abort Command Limit: 4 00:07:56.630 Async Event Request Limit: 4 00:07:56.630 Number of Firmware Slots: N/A 00:07:56.630 Firmware Slot 1 Read-Only: N/A 00:07:56.630 Firmware Activation Without Reset: N/A 00:07:56.630 Multiple Update Detection Support: N/A 00:07:56.630 Firmware Update Granularity: No Information Provided 00:07:56.630 Per-Namespace SMART Log: Yes 00:07:56.630 Asymmetric Namespace Access Log Page: Not Supported
00:07:56.630 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:56.630 Command Effects Log Page: Supported 00:07:56.630 Get Log Page Extended Data: Supported 00:07:56.630 Telemetry Log Pages: Not Supported 00:07:56.630 Persistent Event Log Pages: Not Supported 00:07:56.630 Supported Log Pages Log Page: May Support 00:07:56.630 Commands Supported & Effects Log Page: Not Supported 00:07:56.630 Feature Identifiers & Effects Log Page: May Support 00:07:56.630 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.630 Data Area 4 for Telemetry Log: Not Supported 00:07:56.630 Error Log Page Entries Supported: 1 00:07:56.630 Keep Alive: Not Supported 00:07:56.630 00:07:56.630 NVM Command Set Attributes 00:07:56.630 ========================== 00:07:56.630 Submission Queue Entry Size 00:07:56.630 Max: 64 00:07:56.630 Min: 64 00:07:56.630 Completion Queue Entry Size 00:07:56.630 Max: 16 00:07:56.630 Min: 16 00:07:56.630 Number of Namespaces: 256 00:07:56.630 Compare Command: Supported 00:07:56.630 Write Uncorrectable Command: Not Supported 00:07:56.630 Dataset Management Command: Supported 00:07:56.630 Write Zeroes Command: Supported 00:07:56.630 Set Features Save Field: Supported 00:07:56.630 Reservations: Not Supported 00:07:56.630 Timestamp: Supported 00:07:56.630 Copy: Supported 00:07:56.630 Volatile Write Cache: Present 00:07:56.630 Atomic Write Unit (Normal): 1 00:07:56.630 Atomic Write Unit (PFail): 1 00:07:56.630 Atomic Compare & Write Unit: 1 00:07:56.630 Fused Compare & Write: Not Supported 00:07:56.630 Scatter-Gather List 00:07:56.630 SGL Command Set: Supported 00:07:56.630 SGL Keyed: Not Supported 00:07:56.630 SGL Bit Bucket Descriptor: Not Supported 00:07:56.630 SGL Metadata Pointer: Not Supported 00:07:56.630 Oversized SGL: Not Supported 00:07:56.631 SGL Metadata Address: Not Supported 00:07:56.631 SGL Offset: Not Supported 00:07:56.631 Transport SGL Data Block: Not Supported 00:07:56.631 Replay Protected Memory Block: Not Supported 00:07:56.631 00:07:56.631 Firmware Slot Information 00:07:56.631 ========================= 00:07:56.631 Active slot: 1 00:07:56.631 Slot 1 Firmware Revision: 1.0 00:07:56.631 00:07:56.631 00:07:56.631 Commands Supported and Effects 00:07:56.631 ============================== 00:07:56.631 Admin Commands 00:07:56.631 -------------- 00:07:56.631 Delete I/O Submission Queue (00h): Supported 00:07:56.631 Create I/O Submission Queue (01h): Supported 00:07:56.631 Get Log Page (02h): Supported 00:07:56.631 Delete I/O Completion Queue (04h): Supported 00:07:56.631 Create I/O Completion Queue (05h): Supported 00:07:56.631 Identify (06h): Supported 00:07:56.631 Abort (08h): Supported 00:07:56.631 Set Features (09h): Supported 00:07:56.631 Get Features (0Ah): Supported 00:07:56.631 Asynchronous Event Request (0Ch): Supported 00:07:56.631 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.631 Directive Send (19h): Supported 00:07:56.631 Directive Receive (1Ah): Supported 00:07:56.631 Virtualization Management (1Ch): Supported 00:07:56.631 Doorbell Buffer Config (7Ch): Supported 00:07:56.631 Format NVM (80h): Supported LBA-Change 00:07:56.631 I/O Commands 00:07:56.631 ------------ 00:07:56.631 Flush (00h): Supported LBA-Change 00:07:56.631 Write (01h): Supported LBA-Change 00:07:56.631 Read (02h): Supported 00:07:56.631 Compare (05h): Supported 00:07:56.631 Write Zeroes (08h): Supported LBA-Change 00:07:56.631 Dataset Management (09h): Supported LBA-Change 00:07:56.631 Unknown (0Ch): Supported 00:07:56.631 Unknown (12h): Supported 00:07:56.631 Copy
(19h): Supported LBA-Change 00:07:56.631 Unknown (1Dh): Supported LBA-Change 00:07:56.631 00:07:56.631 Error Log 00:07:56.631 ========= 00:07:56.631 00:07:56.631 Arbitration 00:07:56.631 =========== 00:07:56.631 Arbitration Burst: no limit 00:07:56.631 00:07:56.631 Power Management 00:07:56.631 ================ 00:07:56.631 Number of Power States: 1 00:07:56.631 Current Power State: Power State #0 00:07:56.631 Power State #0: 00:07:56.631 Max Power: 25.00 W 00:07:56.631 Non-Operational State: Operational 00:07:56.631 Entry Latency: 16 microseconds 00:07:56.631 Exit Latency: 4 microseconds 00:07:56.631 Relative Read Throughput: 0 00:07:56.631 Relative Read Latency: 0 00:07:56.631 Relative Write Throughput: 0 00:07:56.631 Relative Write Latency: 0 00:07:56.631 Idle Power: Not Reported 00:07:56.631 Active Power: Not Reported 00:07:56.631 Non-Operational Permissive Mode: Not Supported 00:07:56.631 00:07:56.631 Health Information 00:07:56.631 ================== 00:07:56.631 Critical Warnings: 00:07:56.631 Available Spare Space: OK 00:07:56.631 Temperature: OK 00:07:56.631 Device Reliability: OK 00:07:56.631 Read Only: No 00:07:56.631 Volatile Memory Backup: OK 00:07:56.631 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.631 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.631 Available Spare: 0% 00:07:56.631 Available Spare Threshold: 0% 00:07:56.631 Life Percentage Used: 0% 00:07:56.631 Data Units Read: 830 00:07:56.631 Data Units Written: 759 00:07:56.631 Host Read Commands: 38542 00:07:56.631 Host Write Commands: 37965 00:07:56.631 Controller Busy Time: 0 minutes 00:07:56.631 Power Cycles: 0 00:07:56.631 Power On Hours: 0 hours 00:07:56.631 Unsafe Shutdowns: 0 00:07:56.631 Unrecoverable Media Errors: 0 00:07:56.631 Lifetime Error Log Entries: 0 00:07:56.631 Warning Temperature Time: 0 minutes 00:07:56.631 Critical Temperature Time: 0 minutes 00:07:56.631 00:07:56.631 Number of Queues 00:07:56.631 ================ 00:07:56.631 Number of I/O Submission Queues: 64 00:07:56.631 Number of I/O Completion Queues: 64 00:07:56.631 00:07:56.631 ZNS Specific Controller Data 00:07:56.631 ============================ 00:07:56.631 Zone Append Size Limit: 0 00:07:56.631 00:07:56.631 00:07:56.631 Active Namespaces 00:07:56.631 ================= 00:07:56.631 Namespace ID:1 00:07:56.631 Error Recovery Timeout: Unlimited 00:07:56.631 Command Set Identifier: NVM (00h) 00:07:56.631 Deallocate: Supported 00:07:56.631 Deallocated/Unwritten Error: Supported 00:07:56.631 Deallocated Read Value: All 0x00 00:07:56.631 Deallocate in Write Zeroes: Not Supported 00:07:56.631 Deallocated Guard Field: 0xFFFF 00:07:56.631 Flush: Supported 00:07:56.631 Reservation: Not Supported 00:07:56.631 Namespace Sharing Capabilities: Multiple Controllers 00:07:56.631 Size (in LBAs): 262144 (1GiB) 00:07:56.631 Capacity (in LBAs): 262144 (1GiB) 00:07:56.631 Utilization (in LBAs): 262144 (1GiB) 00:07:56.631 Thin Provisioning: Not Supported 00:07:56.631 Per-NS Atomic Units: No 00:07:56.631 Maximum Single Source Range Length: 128 00:07:56.631 Maximum Copy Length: 128 00:07:56.631 Maximum Source Range Count: 128 00:07:56.631 NGUID/EUI64 Never Reused: No 00:07:56.631 Namespace Write Protected: No 00:07:56.631 Endurance group ID: 1 00:07:56.631 Number of LBA Formats: 8 00:07:56.631 Current LBA Format: LBA Format #04 00:07:56.631 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.631 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.631 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.631 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:07:56.631 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.631 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.631 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.631 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.631 00:07:56.631 Get Feature FDP: 00:07:56.631 ================ 00:07:56.631 Enabled: Yes 00:07:56.631 FDP configuration index: 0 00:07:56.631 00:07:56.631 FDP configurations log page 00:07:56.631 =========================== 00:07:56.631 Number of FDP configurations: 1 00:07:56.631 Version: 0 00:07:56.631 Size: 112 00:07:56.631 FDP Configuration Descriptor: 0 00:07:56.631 Descriptor Size: 96 00:07:56.631 Reclaim Group Identifier format: 2 00:07:56.631 FDP Volatile Write Cache: Not Present 00:07:56.631 FDP Configuration: Valid 00:07:56.631 Vendor Specific Size: 0 00:07:56.631 Number of Reclaim Groups: 2 00:07:56.631 Number of Reclaim Unit Handles: 8 00:07:56.631 Max Placement Identifiers: 128 00:07:56.631 Number of Namespaces Supported: 256 00:07:56.631 Reclaim Unit Nominal Size: 6000000 bytes 00:07:56.631 Estimated Reclaim Unit Time Limit: Not Reported 00:07:56.631 RUH Desc #000: RUH Type: Initially Isolated 00:07:56.631 RUH Desc #001: RUH Type: Initially Isolated 00:07:56.631 RUH Desc #002: RUH Type: Initially Isolated 00:07:56.631 RUH Desc #003: RUH Type: Initially Isolated 00:07:56.631 RUH Desc #004: RUH Type: Initially Isolated 00:07:56.631 RUH Desc #005: RUH Type: Initially Isolated 00:07:56.631 RUH Desc #006: RUH Type: Initially Isolated 00:07:56.631 RUH Desc #007: RUH Type: Initially Isolated 00:07:56.631 00:07:56.631 FDP reclaim unit handle usage log page 00:07:56.631 ====================================== 00:07:56.631 Number of Reclaim Unit Handles: 8 00:07:56.631 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:56.631 RUH Usage Desc #001: RUH Attributes: Unused 00:07:56.631 RUH Usage Desc #002: RUH Attributes: Unused 00:07:56.631 RUH Usage Desc #003: RUH Attributes: Unused 00:07:56.631 RUH Usage Desc #004: RUH Attributes: Unused 00:07:56.631 RUH Usage Desc #005: RUH Attributes: Unused 00:07:56.631 RUH Usage Desc #006: RUH Attributes: Unused 00:07:56.631 RUH Usage Desc #007: RUH Attributes: Unused 00:07:56.631 00:07:56.631 FDP statistics log page 00:07:56.631 ======================= 00:07:56.631 Host bytes with metadata written: 481206272 00:07:56.631 Media bytes with metadata written: 481259520 00:07:56.631 Media bytes erased: 0 00:07:56.631 00:07:56.631 FDP events log page 00:07:56.631 =================== 00:07:56.631 Number of FDP events: 0 00:07:56.631 00:07:56.631 NVM Specific Namespace Data 00:07:56.631 =========================== 00:07:56.631 Logical Block Storage Tag Mask: 0 00:07:56.631 Protection Information Capabilities: 00:07:56.631 16b Guard Protection Information Storage Tag Support: No 00:07:56.631 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.631 Storage Tag Check Read Support: No 00:07:56.631 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.631 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.631 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.631 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.631 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.632 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.632 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.632 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.632 00:07:56.632 real 0m1.111s 00:07:56.632 user 0m0.358s 00:07:56.632 sys 0m0.529s 00:07:56.632 03:07:59 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.632 ************************************ 00:07:56.632 03:07:59 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:56.632 END TEST nvme_identify 00:07:56.632 ************************************ 00:07:56.632 03:08:00 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:56.632 03:08:00 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.632 03:08:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.632 03:08:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.632 ************************************ 00:07:56.632 START TEST nvme_perf 00:07:56.632 ************************************ 00:07:56.632 03:08:00 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:56.632 03:08:00 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:58.042 Initializing NVMe Controllers 00:07:58.042 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:58.042 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:58.042 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:58.042 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:58.042 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:58.042 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:58.042 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:58.042 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:58.042 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:58.042 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:58.042 Initialization complete. Launching workers. 
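The perf run above uses the flags visible in its invocation: 128 outstanding I/Os (-q 128), sequential reads (-w read) of 12288 bytes each (-o 12288), for one second (-t 1), with latency tracking enabled (-LL, as passed by the test script). The MiB/s column in the summary that follows is consistent with IOPS scaled by that I/O size; a quick sanity check under that assumption, with the first device's numbers copied from the table:

    # 9596.75 IOPS * 12288 bytes per I/O / 2^20 bytes per MiB ~= 112.46 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 9596.75 * 12288 / 1048576 }'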
00:07:58.042 ======================================================== 00:07:58.042 Latency(us) 00:07:58.042 Device Information : IOPS MiB/s Average min max 00:07:58.042 PCIE (0000:00:11.0) NSID 1 from core 0: 9596.75 112.46 13355.59 9222.22 39989.60 00:07:58.042 PCIE (0000:00:13.0) NSID 1 from core 0: 9596.75 112.46 13347.27 9163.64 40578.53 00:07:58.042 PCIE (0000:00:10.0) NSID 1 from core 0: 9596.75 112.46 13334.72 8633.31 41205.27 00:07:58.042 PCIE (0000:00:12.0) NSID 1 from core 0: 9596.75 112.46 13324.23 8423.71 41452.35 00:07:58.042 PCIE (0000:00:12.0) NSID 2 from core 0: 9596.75 112.46 13312.09 6998.51 43412.55 00:07:58.042 PCIE (0000:00:12.0) NSID 3 from core 0: 9660.73 113.21 13212.27 6507.20 30689.58 00:07:58.042 ======================================================== 00:07:58.042 Total : 57644.46 675.52 13314.25 6507.20 43412.55 00:07:58.042 00:07:58.042 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:58.042 ================================================================================= 00:07:58.042 1.00000% : 9628.751us 00:07:58.042 10.00000% : 10132.874us 00:07:58.042 25.00000% : 10838.646us 00:07:58.042 50.00000% : 13510.498us 00:07:58.042 75.00000% : 14922.043us 00:07:58.042 90.00000% : 16232.763us 00:07:58.042 95.00000% : 17140.185us 00:07:58.042 98.00000% : 20064.098us 00:07:58.042 99.00000% : 28029.243us 00:07:58.042 99.50000% : 39119.951us 00:07:58.042 99.90000% : 39926.548us 00:07:58.042 99.99000% : 40128.197us 00:07:58.042 99.99900% : 40128.197us 00:07:58.042 99.99990% : 40128.197us 00:07:58.042 99.99999% : 40128.197us 00:07:58.042 00:07:58.042 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:58.042 ================================================================================= 00:07:58.042 1.00000% : 9578.338us 00:07:58.042 10.00000% : 10132.874us 00:07:58.042 25.00000% : 10838.646us 00:07:58.042 50.00000% : 13510.498us 00:07:58.042 75.00000% : 14922.043us 00:07:58.042 90.00000% : 16131.938us 00:07:58.042 95.00000% : 17241.009us 00:07:58.042 98.00000% : 19559.975us 00:07:58.042 99.00000% : 27827.594us 00:07:58.042 99.50000% : 39724.898us 00:07:58.042 99.90000% : 40531.495us 00:07:58.042 99.99000% : 40733.145us 00:07:58.042 99.99900% : 40733.145us 00:07:58.042 99.99990% : 40733.145us 00:07:58.042 99.99999% : 40733.145us 00:07:58.042 00:07:58.042 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:58.042 ================================================================================= 00:07:58.042 1.00000% : 9427.102us 00:07:58.042 10.00000% : 10132.874us 00:07:58.042 25.00000% : 10838.646us 00:07:58.042 50.00000% : 13409.674us 00:07:58.042 75.00000% : 14922.043us 00:07:58.042 90.00000% : 16232.763us 00:07:58.042 95.00000% : 17341.834us 00:07:58.042 98.00000% : 18955.028us 00:07:58.042 99.00000% : 27625.945us 00:07:58.042 99.50000% : 40128.197us 00:07:58.042 99.90000% : 41136.443us 00:07:58.042 99.99000% : 41338.092us 00:07:58.042 99.99900% : 41338.092us 00:07:58.042 99.99990% : 41338.092us 00:07:58.042 99.99999% : 41338.092us 00:07:58.042 00:07:58.042 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:58.042 ================================================================================= 00:07:58.042 1.00000% : 9427.102us 00:07:58.042 10.00000% : 10132.874us 00:07:58.042 25.00000% : 10788.234us 00:07:58.042 50.00000% : 13409.674us 00:07:58.042 75.00000% : 14922.043us 00:07:58.042 90.00000% : 16232.763us 00:07:58.042 95.00000% : 17241.009us 00:07:58.042 98.00000% : 18955.028us 
00:07:58.042 99.00000% : 28029.243us 00:07:58.042 99.50000% : 40531.495us 00:07:58.042 99.90000% : 41338.092us 00:07:58.042 99.99000% : 41539.742us 00:07:58.042 99.99900% : 41539.742us 00:07:58.042 99.99990% : 41539.742us 00:07:58.042 99.99999% : 41539.742us 00:07:58.042 00:07:58.042 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:58.043 ================================================================================= 00:07:58.043 1.00000% : 9376.689us 00:07:58.043 10.00000% : 10132.874us 00:07:58.043 25.00000% : 10788.234us 00:07:58.043 50.00000% : 13308.849us 00:07:58.043 75.00000% : 14922.043us 00:07:58.043 90.00000% : 16131.938us 00:07:58.043 95.00000% : 16938.535us 00:07:58.043 98.00000% : 19459.151us 00:07:58.043 99.00000% : 29037.489us 00:07:58.043 99.50000% : 42346.338us 00:07:58.043 99.90000% : 43354.585us 00:07:58.043 99.99000% : 43556.234us 00:07:58.043 99.99900% : 43556.234us 00:07:58.043 99.99990% : 43556.234us 00:07:58.043 99.99999% : 43556.234us 00:07:58.043 00:07:58.043 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:58.043 ================================================================================= 00:07:58.043 1.00000% : 9477.514us 00:07:58.043 10.00000% : 10132.874us 00:07:58.043 25.00000% : 10788.234us 00:07:58.043 50.00000% : 13409.674us 00:07:58.043 75.00000% : 14922.043us 00:07:58.043 90.00000% : 16232.763us 00:07:58.043 95.00000% : 17039.360us 00:07:58.043 98.00000% : 20568.222us 00:07:58.043 99.00000% : 21475.643us 00:07:58.043 99.50000% : 29844.086us 00:07:58.043 99.90000% : 30650.683us 00:07:58.043 99.99000% : 30852.332us 00:07:58.043 99.99900% : 30852.332us 00:07:58.043 99.99990% : 30852.332us 00:07:58.043 99.99999% : 30852.332us 00:07:58.043 00:07:58.043 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:58.043 ============================================================================== 00:07:58.043 Range in us Cumulative IO count 00:07:58.043 9175.040 - 9225.452: 0.0208% ( 2) 00:07:58.043 9225.452 - 9275.865: 0.0521% ( 3) 00:07:58.043 9275.865 - 9326.277: 0.1771% ( 12) 00:07:58.043 9326.277 - 9376.689: 0.2917% ( 11) 00:07:58.043 9376.689 - 9427.102: 0.3438% ( 5) 00:07:58.043 9427.102 - 9477.514: 0.4375% ( 9) 00:07:58.043 9477.514 - 9527.926: 0.6667% ( 22) 00:07:58.043 9527.926 - 9578.338: 0.9896% ( 31) 00:07:58.043 9578.338 - 9628.751: 1.3125% ( 31) 00:07:58.043 9628.751 - 9679.163: 1.7396% ( 41) 00:07:58.043 9679.163 - 9729.575: 2.3229% ( 56) 00:07:58.043 9729.575 - 9779.988: 3.3021% ( 94) 00:07:58.043 9779.988 - 9830.400: 4.1042% ( 77) 00:07:58.043 9830.400 - 9880.812: 4.9062% ( 77) 00:07:58.043 9880.812 - 9931.225: 5.8438% ( 90) 00:07:58.043 9931.225 - 9981.637: 6.7917% ( 91) 00:07:58.043 9981.637 - 10032.049: 7.9167% ( 108) 00:07:58.043 10032.049 - 10082.462: 9.0625% ( 110) 00:07:58.043 10082.462 - 10132.874: 10.2083% ( 110) 00:07:58.043 10132.874 - 10183.286: 11.4375% ( 118) 00:07:58.043 10183.286 - 10233.698: 12.6250% ( 114) 00:07:58.043 10233.698 - 10284.111: 13.8542% ( 118) 00:07:58.043 10284.111 - 10334.523: 15.2812% ( 137) 00:07:58.043 10334.523 - 10384.935: 16.5521% ( 122) 00:07:58.043 10384.935 - 10435.348: 17.7188% ( 112) 00:07:58.043 10435.348 - 10485.760: 18.9583% ( 119) 00:07:58.043 10485.760 - 10536.172: 20.0104% ( 101) 00:07:58.043 10536.172 - 10586.585: 20.9583% ( 91) 00:07:58.043 10586.585 - 10636.997: 21.9479% ( 95) 00:07:58.043 10636.997 - 10687.409: 22.8854% ( 90) 00:07:58.043 10687.409 - 10737.822: 23.7500% ( 83) 00:07:58.043 10737.822 - 10788.234: 24.5417% ( 76) 
00:07:58.043 10788.234 - 10838.646: 25.3750% ( 80) 00:07:58.043 10838.646 - 10889.058: 26.1250% ( 72) 00:07:58.043 10889.058 - 10939.471: 27.0208% ( 86) 00:07:58.043 10939.471 - 10989.883: 27.8333% ( 78) 00:07:58.043 10989.883 - 11040.295: 28.7396% ( 87) 00:07:58.043 11040.295 - 11090.708: 29.6458% ( 87) 00:07:58.043 11090.708 - 11141.120: 30.4583% ( 78) 00:07:58.043 11141.120 - 11191.532: 31.3021% ( 81) 00:07:58.043 11191.532 - 11241.945: 31.9688% ( 64) 00:07:58.043 11241.945 - 11292.357: 32.6042% ( 61) 00:07:58.043 11292.357 - 11342.769: 33.2708% ( 64) 00:07:58.043 11342.769 - 11393.182: 33.8854% ( 59) 00:07:58.043 11393.182 - 11443.594: 34.6042% ( 69) 00:07:58.043 11443.594 - 11494.006: 35.2292% ( 60) 00:07:58.043 11494.006 - 11544.418: 35.8125% ( 56) 00:07:58.043 11544.418 - 11594.831: 36.4167% ( 58) 00:07:58.043 11594.831 - 11645.243: 37.0833% ( 64) 00:07:58.043 11645.243 - 11695.655: 37.6875% ( 58) 00:07:58.043 11695.655 - 11746.068: 38.2188% ( 51) 00:07:58.043 11746.068 - 11796.480: 38.7292% ( 49) 00:07:58.043 11796.480 - 11846.892: 39.2396% ( 49) 00:07:58.043 11846.892 - 11897.305: 39.6562% ( 40) 00:07:58.043 11897.305 - 11947.717: 39.9375% ( 27) 00:07:58.043 11947.717 - 11998.129: 40.3021% ( 35) 00:07:58.043 11998.129 - 12048.542: 40.6771% ( 36) 00:07:58.043 12048.542 - 12098.954: 41.0208% ( 33) 00:07:58.043 12098.954 - 12149.366: 41.3646% ( 33) 00:07:58.043 12149.366 - 12199.778: 41.6458% ( 27) 00:07:58.043 12199.778 - 12250.191: 41.8854% ( 23) 00:07:58.043 12250.191 - 12300.603: 42.1146% ( 22) 00:07:58.043 12300.603 - 12351.015: 42.3021% ( 18) 00:07:58.043 12351.015 - 12401.428: 42.5104% ( 20) 00:07:58.043 12401.428 - 12451.840: 42.7292% ( 21) 00:07:58.043 12451.840 - 12502.252: 42.8958% ( 16) 00:07:58.043 12502.252 - 12552.665: 43.0312% ( 13) 00:07:58.043 12552.665 - 12603.077: 43.1979% ( 16) 00:07:58.043 12603.077 - 12653.489: 43.4167% ( 21) 00:07:58.043 12653.489 - 12703.902: 43.6146% ( 19) 00:07:58.043 12703.902 - 12754.314: 43.8750% ( 25) 00:07:58.043 12754.314 - 12804.726: 44.0833% ( 20) 00:07:58.043 12804.726 - 12855.138: 44.2604% ( 17) 00:07:58.043 12855.138 - 12905.551: 44.4896% ( 22) 00:07:58.043 12905.551 - 13006.375: 45.4479% ( 92) 00:07:58.043 13006.375 - 13107.200: 46.2812% ( 80) 00:07:58.043 13107.200 - 13208.025: 47.2396% ( 92) 00:07:58.043 13208.025 - 13308.849: 48.4375% ( 115) 00:07:58.043 13308.849 - 13409.674: 49.7083% ( 122) 00:07:58.043 13409.674 - 13510.498: 51.0521% ( 129) 00:07:58.043 13510.498 - 13611.323: 52.7917% ( 167) 00:07:58.043 13611.323 - 13712.148: 54.5521% ( 169) 00:07:58.043 13712.148 - 13812.972: 56.5312% ( 190) 00:07:58.043 13812.972 - 13913.797: 58.4375% ( 183) 00:07:58.043 13913.797 - 14014.622: 60.2188% ( 171) 00:07:58.043 14014.622 - 14115.446: 62.1667% ( 187) 00:07:58.043 14115.446 - 14216.271: 64.1979% ( 195) 00:07:58.043 14216.271 - 14317.095: 66.1042% ( 183) 00:07:58.043 14317.095 - 14417.920: 67.9271% ( 175) 00:07:58.043 14417.920 - 14518.745: 69.6875% ( 169) 00:07:58.043 14518.745 - 14619.569: 71.3854% ( 163) 00:07:58.043 14619.569 - 14720.394: 73.0625% ( 161) 00:07:58.043 14720.394 - 14821.218: 74.4375% ( 132) 00:07:58.043 14821.218 - 14922.043: 75.9062% ( 141) 00:07:58.043 14922.043 - 15022.868: 77.4688% ( 150) 00:07:58.043 15022.868 - 15123.692: 78.6771% ( 116) 00:07:58.043 15123.692 - 15224.517: 79.9062% ( 118) 00:07:58.043 15224.517 - 15325.342: 81.1250% ( 117) 00:07:58.043 15325.342 - 15426.166: 82.3229% ( 115) 00:07:58.043 15426.166 - 15526.991: 83.6458% ( 127) 00:07:58.043 15526.991 - 15627.815: 84.7292% ( 104) 
00:07:58.043 15627.815 - 15728.640: 85.6875% ( 92) 00:07:58.043 15728.640 - 15829.465: 86.4583% ( 74) 00:07:58.043 15829.465 - 15930.289: 87.2604% ( 77) 00:07:58.043 15930.289 - 16031.114: 88.1146% ( 82) 00:07:58.043 16031.114 - 16131.938: 89.2083% ( 105) 00:07:58.043 16131.938 - 16232.763: 90.1146% ( 87) 00:07:58.043 16232.763 - 16333.588: 91.0729% ( 92) 00:07:58.043 16333.588 - 16434.412: 91.7917% ( 69) 00:07:58.043 16434.412 - 16535.237: 92.4271% ( 61) 00:07:58.043 16535.237 - 16636.062: 93.0729% ( 62) 00:07:58.043 16636.062 - 16736.886: 93.5208% ( 43) 00:07:58.043 16736.886 - 16837.711: 93.9896% ( 45) 00:07:58.043 16837.711 - 16938.535: 94.4375% ( 43) 00:07:58.043 16938.535 - 17039.360: 94.8125% ( 36) 00:07:58.043 17039.360 - 17140.185: 95.0938% ( 27) 00:07:58.043 17140.185 - 17241.009: 95.3542% ( 25) 00:07:58.043 17241.009 - 17341.834: 95.5625% ( 20) 00:07:58.043 17341.834 - 17442.658: 95.7812% ( 21) 00:07:58.043 17442.658 - 17543.483: 95.9271% ( 14) 00:07:58.043 17543.483 - 17644.308: 96.0312% ( 10) 00:07:58.043 17644.308 - 17745.132: 96.1562% ( 12) 00:07:58.043 17745.132 - 17845.957: 96.2917% ( 13) 00:07:58.043 17845.957 - 17946.782: 96.4375% ( 14) 00:07:58.043 17946.782 - 18047.606: 96.5938% ( 15) 00:07:58.043 18047.606 - 18148.431: 96.6771% ( 8) 00:07:58.043 18148.431 - 18249.255: 96.7812% ( 10) 00:07:58.043 18249.255 - 18350.080: 96.8646% ( 8) 00:07:58.043 18350.080 - 18450.905: 96.9375% ( 7) 00:07:58.043 18450.905 - 18551.729: 97.0000% ( 6) 00:07:58.043 18551.729 - 18652.554: 97.0625% ( 6) 00:07:58.043 18652.554 - 18753.378: 97.1250% ( 6) 00:07:58.043 18753.378 - 18854.203: 97.1875% ( 6) 00:07:58.043 18854.203 - 18955.028: 97.2812% ( 9) 00:07:58.043 18955.028 - 19055.852: 97.3750% ( 9) 00:07:58.043 19055.852 - 19156.677: 97.4583% ( 8) 00:07:58.043 19156.677 - 19257.502: 97.5312% ( 7) 00:07:58.043 19257.502 - 19358.326: 97.5938% ( 6) 00:07:58.043 19358.326 - 19459.151: 97.6458% ( 5) 00:07:58.043 19459.151 - 19559.975: 97.7083% ( 6) 00:07:58.043 19559.975 - 19660.800: 97.7708% ( 6) 00:07:58.043 19660.800 - 19761.625: 97.8438% ( 7) 00:07:58.043 19761.625 - 19862.449: 97.8958% ( 5) 00:07:58.043 19862.449 - 19963.274: 97.9583% ( 6) 00:07:58.043 19963.274 - 20064.098: 98.0000% ( 4) 00:07:58.043 21173.169 - 21273.994: 98.0208% ( 2) 00:07:58.043 21273.994 - 21374.818: 98.0729% ( 5) 00:07:58.043 21374.818 - 21475.643: 98.1562% ( 8) 00:07:58.043 21475.643 - 21576.468: 98.2188% ( 6) 00:07:58.043 21576.468 - 21677.292: 98.2500% ( 3) 00:07:58.043 21677.292 - 21778.117: 98.3021% ( 5) 00:07:58.043 21778.117 - 21878.942: 98.3646% ( 6) 00:07:58.044 21878.942 - 21979.766: 98.4167% ( 5) 00:07:58.044 21979.766 - 22080.591: 98.4792% ( 6) 00:07:58.044 22080.591 - 22181.415: 98.5417% ( 6) 00:07:58.044 22181.415 - 22282.240: 98.5938% ( 5) 00:07:58.044 22282.240 - 22383.065: 98.6562% ( 6) 00:07:58.044 22383.065 - 22483.889: 98.6667% ( 1) 00:07:58.044 27222.646 - 27424.295: 98.7188% ( 5) 00:07:58.044 27424.295 - 27625.945: 98.8438% ( 12) 00:07:58.044 27625.945 - 27827.594: 98.9583% ( 11) 00:07:58.044 27827.594 - 28029.243: 99.0729% ( 11) 00:07:58.044 28029.243 - 28230.892: 99.1979% ( 12) 00:07:58.044 28230.892 - 28432.542: 99.3125% ( 11) 00:07:58.044 28432.542 - 28634.191: 99.3333% ( 2) 00:07:58.044 38716.652 - 38918.302: 99.3958% ( 6) 00:07:58.044 38918.302 - 39119.951: 99.5000% ( 10) 00:07:58.044 39119.951 - 39321.600: 99.6146% ( 11) 00:07:58.044 39321.600 - 39523.249: 99.7396% ( 12) 00:07:58.044 39523.249 - 39724.898: 99.8542% ( 11) 00:07:58.044 39724.898 - 39926.548: 99.9688% ( 11) 00:07:58.044 
39926.548 - 40128.197: 100.0000% ( 3) 00:07:58.044 00:07:58.044 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:58.044 ============================================================================== 00:07:58.044 Range in us Cumulative IO count 00:07:58.044 9124.628 - 9175.040: 0.0104% ( 1) 00:07:58.044 9175.040 - 9225.452: 0.0521% ( 4) 00:07:58.044 9225.452 - 9275.865: 0.1042% ( 5) 00:07:58.044 9275.865 - 9326.277: 0.2396% ( 13) 00:07:58.044 9326.277 - 9376.689: 0.3542% ( 11) 00:07:58.044 9376.689 - 9427.102: 0.4583% ( 10) 00:07:58.044 9427.102 - 9477.514: 0.6146% ( 15) 00:07:58.044 9477.514 - 9527.926: 0.8333% ( 21) 00:07:58.044 9527.926 - 9578.338: 1.1250% ( 28) 00:07:58.044 9578.338 - 9628.751: 1.6667% ( 52) 00:07:58.044 9628.751 - 9679.163: 2.2188% ( 53) 00:07:58.044 9679.163 - 9729.575: 2.8958% ( 65) 00:07:58.044 9729.575 - 9779.988: 3.6354% ( 71) 00:07:58.044 9779.988 - 9830.400: 4.6042% ( 93) 00:07:58.044 9830.400 - 9880.812: 5.3958% ( 76) 00:07:58.044 9880.812 - 9931.225: 6.2396% ( 81) 00:07:58.044 9931.225 - 9981.637: 7.3229% ( 104) 00:07:58.044 9981.637 - 10032.049: 8.3750% ( 101) 00:07:58.044 10032.049 - 10082.462: 9.4688% ( 105) 00:07:58.044 10082.462 - 10132.874: 10.6562% ( 114) 00:07:58.044 10132.874 - 10183.286: 11.8438% ( 114) 00:07:58.044 10183.286 - 10233.698: 13.0729% ( 118) 00:07:58.044 10233.698 - 10284.111: 14.2292% ( 111) 00:07:58.044 10284.111 - 10334.523: 15.4271% ( 115) 00:07:58.044 10334.523 - 10384.935: 16.6042% ( 113) 00:07:58.044 10384.935 - 10435.348: 17.8854% ( 123) 00:07:58.044 10435.348 - 10485.760: 19.0312% ( 110) 00:07:58.044 10485.760 - 10536.172: 20.0000% ( 93) 00:07:58.044 10536.172 - 10586.585: 21.0938% ( 105) 00:07:58.044 10586.585 - 10636.997: 22.1146% ( 98) 00:07:58.044 10636.997 - 10687.409: 23.1458% ( 99) 00:07:58.044 10687.409 - 10737.822: 24.0000% ( 82) 00:07:58.044 10737.822 - 10788.234: 24.9167% ( 88) 00:07:58.044 10788.234 - 10838.646: 25.8646% ( 91) 00:07:58.044 10838.646 - 10889.058: 26.7604% ( 86) 00:07:58.044 10889.058 - 10939.471: 27.5625% ( 77) 00:07:58.044 10939.471 - 10989.883: 28.3125% ( 72) 00:07:58.044 10989.883 - 11040.295: 29.0417% ( 70) 00:07:58.044 11040.295 - 11090.708: 29.7292% ( 66) 00:07:58.044 11090.708 - 11141.120: 30.4167% ( 66) 00:07:58.044 11141.120 - 11191.532: 31.1250% ( 68) 00:07:58.044 11191.532 - 11241.945: 31.8229% ( 67) 00:07:58.044 11241.945 - 11292.357: 32.5521% ( 70) 00:07:58.044 11292.357 - 11342.769: 33.1979% ( 62) 00:07:58.044 11342.769 - 11393.182: 33.8542% ( 63) 00:07:58.044 11393.182 - 11443.594: 34.4896% ( 61) 00:07:58.044 11443.594 - 11494.006: 35.1875% ( 67) 00:07:58.044 11494.006 - 11544.418: 35.7812% ( 57) 00:07:58.044 11544.418 - 11594.831: 36.3438% ( 54) 00:07:58.044 11594.831 - 11645.243: 36.8333% ( 47) 00:07:58.044 11645.243 - 11695.655: 37.3646% ( 51) 00:07:58.044 11695.655 - 11746.068: 37.8854% ( 50) 00:07:58.044 11746.068 - 11796.480: 38.4062% ( 50) 00:07:58.044 11796.480 - 11846.892: 38.9271% ( 50) 00:07:58.044 11846.892 - 11897.305: 39.4167% ( 47) 00:07:58.044 11897.305 - 11947.717: 39.9062% ( 47) 00:07:58.044 11947.717 - 11998.129: 40.3646% ( 44) 00:07:58.044 11998.129 - 12048.542: 40.7917% ( 41) 00:07:58.044 12048.542 - 12098.954: 41.2292% ( 42) 00:07:58.044 12098.954 - 12149.366: 41.5625% ( 32) 00:07:58.044 12149.366 - 12199.778: 41.8958% ( 32) 00:07:58.044 12199.778 - 12250.191: 42.1771% ( 27) 00:07:58.044 12250.191 - 12300.603: 42.4792% ( 29) 00:07:58.044 12300.603 - 12351.015: 42.7396% ( 25) 00:07:58.044 12351.015 - 12401.428: 42.9792% ( 23) 00:07:58.044 
00:07:58.044 [bucket rows elided for the preceding controller: 12401.428 - 12451.840: 43.1875% ( 20) through 40531.495 - 40733.145: 100.0000% ( 3)]
00:07:58.045 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:58.045 ==============================================================================
00:07:58.045        Range in us     Cumulative    IO count
00:07:58.045 [bucket rows elided: 8620.505 - 8670.917: 0.0417% ( 4) through 41136.443 - 41338.092: 100.0000% ( 4)]
00:07:58.046 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:58.046 ==============================================================================
00:07:58.046        Range in us     Cumulative    IO count
00:07:58.046 [bucket rows elided: 8418.855 - 8469.268: 0.0417% ( 4) through 41338.092 - 41539.742: 100.0000% ( 5)]
00:07:58.047 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:58.047 ==============================================================================
00:07:58.047        Range in us     Cumulative    IO count
00:07:58.048 [bucket rows elided: 6956.898 - 7007.311: 0.0104% ( 1) through 43354.585 - 43556.234: 100.0000% ( 3)]
00:07:58.048 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:58.048 ==============================================================================
00:07:58.048        Range in us     Cumulative    IO count
00:07:58.049 [bucket rows elided: 6503.188 - 6553.600: 0.0207% ( 2) through 30650.683 - 30852.332: 100.0000% ( 3)]
00:07:58.049 
00:07:58.049  03:08:01 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:58.986 Initializing NVMe Controllers
00:07:58.986 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:58.986 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:58.986 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:58.986 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:58.986 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:58.986 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:58.986 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:58.986 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:58.986 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:58.986 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:58.986 Initialization complete. Launching workers.
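The spdk_nvme_perf line above is the whole benchmark definition for this run. A minimal sketch for replaying it by hand, assuming the flag semantics from the tool's usage text (-q queue depth, -w workload, -o I/O size in bytes, -t run time in seconds, -L latency tracking, doubled to -LL for the per-bucket histograms, -i shared-memory ID) and the checkout path shown in this log:

  #!/usr/bin/env bash
  # Sketch: replay the CI workload outside the harness. SPDK_DIR and the use
  # of sudo are assumptions; direct NVMe device access normally needs root.
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
  sudo "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -w write -o 12288 -t 1 -LL -i 0

Note that -t 1 keeps the sample small (roughly 17k I/Os per namespace here), which is worth remembering when reading the tail percentiles below.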
00:07:58.986 ========================================================
00:07:58.986                                                               Latency(us)
00:07:58.986 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:58.986 PCIE (0000:00:11.0) NSID 1 from core 0:   16955.46     198.70    7553.12    5511.56   24863.80
00:07:58.986 PCIE (0000:00:13.0) NSID 1 from core 0:   16955.46     198.70    7547.00    5196.92   24142.64
00:07:58.986 PCIE (0000:00:10.0) NSID 1 from core 0:   16955.46     198.70    7540.15    4757.09   24786.85
00:07:58.986 PCIE (0000:00:12.0) NSID 1 from core 0:   16955.46     198.70    7533.48    4490.26   24029.46
00:07:58.986 PCIE (0000:00:12.0) NSID 2 from core 0:   16955.46     198.70    7527.27    3650.93   24603.54
00:07:58.986 PCIE (0000:00:12.0) NSID 3 from core 0:   16955.46     198.70    7520.90    3401.01   24195.92
00:07:58.986 ========================================================
00:07:58.986 Total                                  :  101732.74    1192.18    7536.98    3401.01   24863.80
00:07:58.986 
00:07:58.987 Summary latency data from core 0, consolidated across the six namespaces (columns: PCIe BDF/NSID; values in us):
00:07:58.987 =================================================================================
00:07:58.987 Percentile :   11.0/1      13.0/1      10.0/1      12.0/1      12.0/2      12.0/3
00:07:58.987   1.00000% :  6200.714    6175.508    6024.271    6175.508    6175.508    6150.302
00:07:58.987  10.00000% :  6452.775    6452.775    6402.363    6452.775    6452.775    6452.775
00:07:58.987  25.00000% :  6654.425    6654.425    6654.425    6654.425    6654.425    6654.425
00:07:58.987  50.00000% :  6906.486    6906.486    6956.898    6906.486    6956.898    6906.486
00:07:58.987  75.00000% :  8065.969    8015.557    7965.145    8065.969    8015.557    8015.557
00:07:58.987  90.00000% :  9275.865    9326.277    9326.277    9074.215    9023.803    9023.803
00:07:58.987  95.00000% : 10737.822   10284.111   10435.348   10687.409   10636.997   10636.997
00:07:58.987  98.00000% : 12451.840   12199.778   12300.603   12250.191   12300.603   12351.015
00:07:58.987  99.00000% : 13611.323   13611.323   13712.148   13308.849   13308.849   13208.025
00:07:58.987  99.50000% : 16837.711   17946.782   18148.431   17845.957   17543.483   17543.483
00:07:58.987  99.90000% : 24702.031   23895.434   24097.083   23693.785   24298.732   23794.609
00:07:58.987  99.99000% : 24903.680   24197.908   24802.855   24097.083   24601.206   24197.908
00:07:58.987  99.99900% : 24903.680   24197.908   24802.855   24097.083   24702.031   24197.908
00:07:58.987  99.99990% : 24903.680   24197.908   24802.855   24097.083   24702.031   24197.908
00:07:58.987  99.99999% : 24903.680   24197.908   24802.855   24097.083   24702.031   24197.908
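The MiB/s column is implied by IOPS and the 12288-byte I/O size; a quick arithmetic check (numbers copied from the table above, awk used only as a calculator):

  awk 'BEGIN {
    printf "%.2f MiB/s per namespace\n", 16955.46  * 12288 / 1048576   # -> 198.70
    printf "%.2f MiB/s total\n",         101732.74 * 12288 / 1048576   # -> 1192.18
  }'

All six namespaces land on the same IOPS figure, which is consistent with the six workers being driven from the same lcore 0 at the same queue depth; the per-device differences show up only in the latency columns.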
00:07:58.987 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:58.987 ==============================================================================
00:07:58.987        Range in us     Cumulative    IO count
00:07:58.988 [bucket rows elided: 5494.942 - 5520.148: 0.0118% ( 2) through 24802.855 - 24903.680: 100.0000% ( 5)]
00:07:58.988 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:58.988 ==============================================================================
00:07:58.988        Range in us     Cumulative    IO count
00:07:58.989 [bucket rows elided: 5192.468 - 5217.674: 0.0118% ( 2) through 24097.083 - 24197.908: 100.0000% ( 2)]
00:07:58.989 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:58.989 ==============================================================================
00:07:58.989        Range in us     Cumulative    IO count
00:07:58.989 [bucket rows elided: 4738.757 - 4763.963: 0.0118% ( 2) through 6377.157 - 6402.363: 11.1851% ( 281); the captured log breaks off mid-histogram]
00:07:58.989 6402.363
- 6427.569: 12.8420% ( 281) 00:07:58.990 6427.569 - 6452.775: 14.6285% ( 303) 00:07:58.990 6452.775 - 6503.188: 17.8656% ( 549) 00:07:58.990 6503.188 - 6553.600: 21.4682% ( 611) 00:07:58.990 6553.600 - 6604.012: 24.9646% ( 593) 00:07:58.990 6604.012 - 6654.425: 29.0920% ( 700) 00:07:58.990 6654.425 - 6704.837: 33.4670% ( 742) 00:07:58.990 6704.837 - 6755.249: 37.6297% ( 706) 00:07:58.990 6755.249 - 6805.662: 41.1144% ( 591) 00:07:58.990 6805.662 - 6856.074: 44.3219% ( 544) 00:07:58.990 6856.074 - 6906.486: 47.5943% ( 555) 00:07:58.990 6906.486 - 6956.898: 50.3479% ( 467) 00:07:58.990 6956.898 - 7007.311: 52.9363% ( 439) 00:07:58.990 7007.311 - 7057.723: 55.3125% ( 403) 00:07:58.990 7057.723 - 7108.135: 57.4941% ( 370) 00:07:58.990 7108.135 - 7158.548: 59.3573% ( 316) 00:07:58.990 7158.548 - 7208.960: 61.0142% ( 281) 00:07:58.990 7208.960 - 7259.372: 62.6356% ( 275) 00:07:58.990 7259.372 - 7309.785: 64.0507% ( 240) 00:07:58.990 7309.785 - 7360.197: 65.1356% ( 184) 00:07:58.990 7360.197 - 7410.609: 66.3915% ( 213) 00:07:58.990 7410.609 - 7461.022: 67.3467% ( 162) 00:07:58.990 7461.022 - 7511.434: 68.2606% ( 155) 00:07:58.990 7511.434 - 7561.846: 69.2512% ( 168) 00:07:58.990 7561.846 - 7612.258: 70.0354% ( 133) 00:07:58.990 7612.258 - 7662.671: 70.7724% ( 125) 00:07:58.990 7662.671 - 7713.083: 71.4092% ( 108) 00:07:58.990 7713.083 - 7763.495: 72.1462% ( 125) 00:07:58.990 7763.495 - 7813.908: 72.9245% ( 132) 00:07:58.990 7813.908 - 7864.320: 73.6439% ( 122) 00:07:58.990 7864.320 - 7914.732: 74.5755% ( 158) 00:07:58.990 7914.732 - 7965.145: 75.3479% ( 131) 00:07:58.990 7965.145 - 8015.557: 76.2559% ( 154) 00:07:58.990 8015.557 - 8065.969: 77.0696% ( 138) 00:07:58.990 8065.969 - 8116.382: 77.8420% ( 131) 00:07:58.990 8116.382 - 8166.794: 78.5790% ( 125) 00:07:58.990 8166.794 - 8217.206: 79.1450% ( 96) 00:07:58.990 8217.206 - 8267.618: 79.9233% ( 132) 00:07:58.990 8267.618 - 8318.031: 80.6545% ( 124) 00:07:58.990 8318.031 - 8368.443: 81.2736% ( 105) 00:07:58.990 8368.443 - 8418.855: 81.7866% ( 87) 00:07:58.990 8418.855 - 8469.268: 82.4410% ( 111) 00:07:58.990 8469.268 - 8519.680: 83.0012% ( 95) 00:07:58.990 8519.680 - 8570.092: 83.4021% ( 68) 00:07:58.990 8570.092 - 8620.505: 83.9682% ( 96) 00:07:58.990 8620.505 - 8670.917: 84.5637% ( 101) 00:07:58.990 8670.917 - 8721.329: 85.4835% ( 156) 00:07:58.990 8721.329 - 8771.742: 86.0377% ( 94) 00:07:58.990 8771.742 - 8822.154: 86.5448% ( 86) 00:07:58.990 8822.154 - 8872.566: 86.9870% ( 75) 00:07:58.990 8872.566 - 8922.978: 87.4705% ( 82) 00:07:58.990 8922.978 - 8973.391: 87.9304% ( 78) 00:07:58.990 8973.391 - 9023.803: 88.4021% ( 80) 00:07:58.990 9023.803 - 9074.215: 88.6969% ( 50) 00:07:58.990 9074.215 - 9124.628: 88.9741% ( 47) 00:07:58.990 9124.628 - 9175.040: 89.2748% ( 51) 00:07:58.990 9175.040 - 9225.452: 89.5932% ( 54) 00:07:58.990 9225.452 - 9275.865: 89.9705% ( 64) 00:07:58.990 9275.865 - 9326.277: 90.2182% ( 42) 00:07:58.990 9326.277 - 9376.689: 90.4658% ( 42) 00:07:58.990 9376.689 - 9427.102: 90.7488% ( 48) 00:07:58.990 9427.102 - 9477.514: 91.1557% ( 69) 00:07:58.990 9477.514 - 9527.926: 91.4917% ( 57) 00:07:58.990 9527.926 - 9578.338: 91.8396% ( 59) 00:07:58.990 9578.338 - 9628.751: 92.1050% ( 45) 00:07:58.990 9628.751 - 9679.163: 92.3939% ( 49) 00:07:58.990 9679.163 - 9729.575: 92.6120% ( 37) 00:07:58.990 9729.575 - 9779.988: 92.8420% ( 39) 00:07:58.990 9779.988 - 9830.400: 93.1309% ( 49) 00:07:58.990 9830.400 - 9880.812: 93.3726% ( 41) 00:07:58.990 9880.812 - 9931.225: 93.5554% ( 31) 00:07:58.990 9931.225 - 9981.637: 93.7323% ( 30) 
00:07:58.990 9981.637 - 10032.049: 94.0035% ( 46) 00:07:58.990 10032.049 - 10082.462: 94.1509% ( 25) 00:07:58.990 10082.462 - 10132.874: 94.2748% ( 21) 00:07:58.990 10132.874 - 10183.286: 94.4222% ( 25) 00:07:58.990 10183.286 - 10233.698: 94.5283% ( 18) 00:07:58.990 10233.698 - 10284.111: 94.6344% ( 18) 00:07:58.990 10284.111 - 10334.523: 94.7465% ( 19) 00:07:58.990 10334.523 - 10384.935: 94.9469% ( 34) 00:07:58.990 10384.935 - 10435.348: 95.0295% ( 14) 00:07:58.990 10435.348 - 10485.760: 95.1474% ( 20) 00:07:58.990 10485.760 - 10536.172: 95.2417% ( 16) 00:07:58.990 10536.172 - 10586.585: 95.3302% ( 15) 00:07:58.990 10586.585 - 10636.997: 95.4127% ( 14) 00:07:58.990 10636.997 - 10687.409: 95.4835% ( 12) 00:07:58.990 10687.409 - 10737.822: 95.5896% ( 18) 00:07:58.990 10737.822 - 10788.234: 95.6604% ( 12) 00:07:58.990 10788.234 - 10838.646: 95.7370% ( 13) 00:07:58.990 10838.646 - 10889.058: 95.7842% ( 8) 00:07:58.990 10889.058 - 10939.471: 95.8550% ( 12) 00:07:58.990 10939.471 - 10989.883: 95.9080% ( 9) 00:07:58.990 10989.883 - 11040.295: 95.9375% ( 5) 00:07:58.990 11040.295 - 11090.708: 95.9611% ( 4) 00:07:58.990 11090.708 - 11141.120: 95.9847% ( 4) 00:07:58.990 11141.120 - 11191.532: 96.0142% ( 5) 00:07:58.990 11191.532 - 11241.945: 96.0436% ( 5) 00:07:58.990 11241.945 - 11292.357: 96.0554% ( 2) 00:07:58.990 11292.357 - 11342.769: 96.0790% ( 4) 00:07:58.990 11342.769 - 11393.182: 96.1085% ( 5) 00:07:58.990 11393.182 - 11443.594: 96.1910% ( 14) 00:07:58.990 11443.594 - 11494.006: 96.3090% ( 20) 00:07:58.990 11494.006 - 11544.418: 96.4328% ( 21) 00:07:58.990 11544.418 - 11594.831: 96.5094% ( 13) 00:07:58.990 11594.831 - 11645.243: 96.6333% ( 21) 00:07:58.990 11645.243 - 11695.655: 96.7394% ( 18) 00:07:58.990 11695.655 - 11746.068: 96.8396% ( 17) 00:07:58.990 11746.068 - 11796.480: 96.9634% ( 21) 00:07:58.990 11796.480 - 11846.892: 97.0873% ( 21) 00:07:58.990 11846.892 - 11897.305: 97.1875% ( 17) 00:07:58.990 11897.305 - 11947.717: 97.3290% ( 24) 00:07:58.990 11947.717 - 11998.129: 97.3880% ( 10) 00:07:58.990 11998.129 - 12048.542: 97.5649% ( 30) 00:07:58.990 12048.542 - 12098.954: 97.7889% ( 38) 00:07:58.990 12098.954 - 12149.366: 97.8420% ( 9) 00:07:58.990 12149.366 - 12199.778: 97.8950% ( 9) 00:07:58.990 12199.778 - 12250.191: 97.9776% ( 14) 00:07:58.990 12250.191 - 12300.603: 98.0542% ( 13) 00:07:58.990 12300.603 - 12351.015: 98.1545% ( 17) 00:07:58.990 12351.015 - 12401.428: 98.2311% ( 13) 00:07:58.990 12401.428 - 12451.840: 98.2724% ( 7) 00:07:58.990 12451.840 - 12502.252: 98.3196% ( 8) 00:07:58.990 12502.252 - 12552.665: 98.3432% ( 4) 00:07:58.990 12552.665 - 12603.077: 98.3844% ( 7) 00:07:58.990 12603.077 - 12653.489: 98.4493% ( 11) 00:07:58.990 12653.489 - 12703.902: 98.5495% ( 17) 00:07:58.990 12703.902 - 12754.314: 98.6910% ( 24) 00:07:58.990 12754.314 - 12804.726: 98.7264% ( 6) 00:07:58.990 12804.726 - 12855.138: 98.7500% ( 4) 00:07:58.990 12855.138 - 12905.551: 98.7795% ( 5) 00:07:58.990 12905.551 - 13006.375: 98.8325% ( 9) 00:07:58.990 13006.375 - 13107.200: 98.8738% ( 7) 00:07:58.990 13107.200 - 13208.025: 98.9033% ( 5) 00:07:58.990 13208.025 - 13308.849: 98.9151% ( 2) 00:07:58.990 13308.849 - 13409.674: 98.9210% ( 1) 00:07:58.990 13409.674 - 13510.498: 98.9564% ( 6) 00:07:58.990 13510.498 - 13611.323: 98.9741% ( 3) 00:07:58.990 13611.323 - 13712.148: 99.0566% ( 14) 00:07:58.990 13712.148 - 13812.972: 99.0861% ( 5) 00:07:58.990 13812.972 - 13913.797: 99.0979% ( 2) 00:07:58.990 14014.622 - 14115.446: 99.1097% ( 2) 00:07:58.990 14115.446 - 14216.271: 99.1686% ( 10) 00:07:58.990 
14518.745 - 14619.569: 99.1981% ( 5) 00:07:58.990 14619.569 - 14720.394: 99.2099% ( 2) 00:07:58.990 14720.394 - 14821.218: 99.2276% ( 3) 00:07:58.990 14821.218 - 14922.043: 99.2453% ( 3) 00:07:58.990 16333.588 - 16434.412: 99.2689% ( 4) 00:07:58.990 16434.412 - 16535.237: 99.3042% ( 6) 00:07:58.990 16535.237 - 16636.062: 99.3219% ( 3) 00:07:58.990 16636.062 - 16736.886: 99.3455% ( 4) 00:07:58.990 16736.886 - 16837.711: 99.3632% ( 3) 00:07:58.990 16837.711 - 16938.535: 99.3868% ( 4) 00:07:58.990 16938.535 - 17039.360: 99.4045% ( 3) 00:07:58.990 17039.360 - 17140.185: 99.4163% ( 2) 00:07:58.990 17140.185 - 17241.009: 99.4222% ( 1) 00:07:58.990 17241.009 - 17341.834: 99.4340% ( 2) 00:07:58.990 17341.834 - 17442.658: 99.4458% ( 2) 00:07:58.990 17442.658 - 17543.483: 99.4575% ( 2) 00:07:58.990 17543.483 - 17644.308: 99.4634% ( 1) 00:07:58.990 17644.308 - 17745.132: 99.4752% ( 2) 00:07:58.990 17745.132 - 17845.957: 99.4811% ( 1) 00:07:58.990 17946.782 - 18047.606: 99.4988% ( 3) 00:07:58.990 18047.606 - 18148.431: 99.5224% ( 4) 00:07:58.990 18148.431 - 18249.255: 99.5578% ( 6) 00:07:58.990 18249.255 - 18350.080: 99.5932% ( 6) 00:07:58.990 18350.080 - 18450.905: 99.6226% ( 5) 00:07:58.990 22887.188 - 22988.012: 99.6344% ( 2) 00:07:58.990 22988.012 - 23088.837: 99.6403% ( 1) 00:07:58.990 23088.837 - 23189.662: 99.6521% ( 2) 00:07:58.990 23189.662 - 23290.486: 99.6580% ( 1) 00:07:58.990 23290.486 - 23391.311: 99.6698% ( 2) 00:07:58.990 23391.311 - 23492.135: 99.6816% ( 2) 00:07:58.990 23492.135 - 23592.960: 99.7170% ( 6) 00:07:58.990 23592.960 - 23693.785: 99.7700% ( 9) 00:07:58.990 23693.785 - 23794.609: 99.8113% ( 7) 00:07:58.990 23794.609 - 23895.434: 99.8526% ( 7) 00:07:58.990 23895.434 - 23996.258: 99.8939% ( 7) 00:07:58.991 23996.258 - 24097.083: 99.9351% ( 7) 00:07:58.991 24097.083 - 24197.908: 99.9646% ( 5) 00:07:58.991 24500.382 - 24601.206: 99.9764% ( 2) 00:07:58.991 24601.206 - 24702.031: 99.9882% ( 2) 00:07:58.991 24702.031 - 24802.855: 100.0000% ( 2) 00:07:58.991 00:07:58.991 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:58.991 ============================================================================== 00:07:58.991 Range in us Cumulative IO count 00:07:58.991 4486.695 - 4511.902: 0.0059% ( 1) 00:07:58.991 4511.902 - 4537.108: 0.0118% ( 1) 00:07:58.991 4537.108 - 4562.314: 0.0354% ( 4) 00:07:58.991 4562.314 - 4587.520: 0.0531% ( 3) 00:07:58.991 4587.520 - 4612.726: 0.0884% ( 6) 00:07:58.991 4612.726 - 4637.932: 0.1238% ( 6) 00:07:58.991 4637.932 - 4663.138: 0.1887% ( 11) 00:07:58.991 4663.138 - 4688.345: 0.2241% ( 6) 00:07:58.991 4688.345 - 4713.551: 0.2417% ( 3) 00:07:58.991 4713.551 - 4738.757: 0.2535% ( 2) 00:07:58.991 4738.757 - 4763.963: 0.2653% ( 2) 00:07:58.991 4763.963 - 4789.169: 0.2712% ( 1) 00:07:58.991 4789.169 - 4814.375: 0.2830% ( 2) 00:07:58.991 4814.375 - 4839.582: 0.2948% ( 2) 00:07:58.991 4839.582 - 4864.788: 0.3066% ( 2) 00:07:58.991 4864.788 - 4889.994: 0.3184% ( 2) 00:07:58.991 4889.994 - 4915.200: 0.3302% ( 2) 00:07:58.991 4915.200 - 4940.406: 0.3420% ( 2) 00:07:58.991 4940.406 - 4965.612: 0.3538% ( 2) 00:07:58.991 4965.612 - 4990.818: 0.3656% ( 2) 00:07:58.991 4990.818 - 5016.025: 0.3774% ( 2) 00:07:58.991 5696.591 - 5721.797: 0.3833% ( 1) 00:07:58.991 5721.797 - 5747.003: 0.3892% ( 1) 00:07:58.991 5772.209 - 5797.415: 0.3950% ( 1) 00:07:58.991 5847.828 - 5873.034: 0.4009% ( 1) 00:07:58.991 5898.240 - 5923.446: 0.4068% ( 1) 00:07:58.991 5948.652 - 5973.858: 0.4127% ( 1) 00:07:58.991 5973.858 - 5999.065: 0.4245% ( 2) 00:07:58.991 5999.065 - 
6024.271: 0.4422% ( 3) 00:07:58.991 6024.271 - 6049.477: 0.4953% ( 9) 00:07:58.991 6049.477 - 6074.683: 0.5366% ( 7) 00:07:58.991 6074.683 - 6099.889: 0.6427% ( 18) 00:07:58.991 6099.889 - 6125.095: 0.7488% ( 18) 00:07:58.991 6125.095 - 6150.302: 0.8550% ( 18) 00:07:58.991 6150.302 - 6175.508: 1.0259% ( 29) 00:07:58.991 6175.508 - 6200.714: 1.3325% ( 52) 00:07:58.991 6200.714 - 6225.920: 1.7158% ( 65) 00:07:58.991 6225.920 - 6251.126: 2.1226% ( 69) 00:07:58.991 6251.126 - 6276.332: 2.5531% ( 73) 00:07:58.991 6276.332 - 6301.538: 3.1545% ( 102) 00:07:58.991 6301.538 - 6326.745: 3.9623% ( 137) 00:07:58.991 6326.745 - 6351.951: 4.7995% ( 142) 00:07:58.991 6351.951 - 6377.157: 5.8608% ( 180) 00:07:58.991 6377.157 - 6402.363: 7.4469% ( 269) 00:07:58.991 6402.363 - 6427.569: 8.6675% ( 207) 00:07:58.991 6427.569 - 6452.775: 10.3125% ( 279) 00:07:58.991 6452.775 - 6503.188: 14.2158% ( 662) 00:07:58.991 6503.188 - 6553.600: 18.9446% ( 802) 00:07:58.991 6553.600 - 6604.012: 23.9858% ( 855) 00:07:58.991 6604.012 - 6654.425: 28.5849% ( 780) 00:07:58.991 6654.425 - 6704.837: 33.4198% ( 820) 00:07:58.991 6704.837 - 6755.249: 37.8950% ( 759) 00:07:58.991 6755.249 - 6805.662: 43.1191% ( 886) 00:07:58.991 6805.662 - 6856.074: 47.1462% ( 683) 00:07:58.991 6856.074 - 6906.486: 50.0943% ( 500) 00:07:58.991 6906.486 - 6956.898: 53.5495% ( 586) 00:07:58.991 6956.898 - 7007.311: 55.5601% ( 341) 00:07:58.991 7007.311 - 7057.723: 57.3231% ( 299) 00:07:58.991 7057.723 - 7108.135: 59.0566% ( 294) 00:07:58.991 7108.135 - 7158.548: 60.5071% ( 246) 00:07:58.991 7158.548 - 7208.960: 61.7099% ( 204) 00:07:58.991 7208.960 - 7259.372: 63.3785% ( 283) 00:07:58.991 7259.372 - 7309.785: 64.4340% ( 179) 00:07:58.991 7309.785 - 7360.197: 65.3597% ( 157) 00:07:58.991 7360.197 - 7410.609: 66.1085% ( 127) 00:07:58.991 7410.609 - 7461.022: 66.8455% ( 125) 00:07:58.991 7461.022 - 7511.434: 67.9540% ( 188) 00:07:58.991 7511.434 - 7561.846: 68.7382% ( 133) 00:07:58.991 7561.846 - 7612.258: 69.4988% ( 129) 00:07:58.991 7612.258 - 7662.671: 70.1356% ( 108) 00:07:58.991 7662.671 - 7713.083: 70.8255% ( 117) 00:07:58.991 7713.083 - 7763.495: 71.3502% ( 89) 00:07:58.991 7763.495 - 7813.908: 71.8573% ( 86) 00:07:58.991 7813.908 - 7864.320: 72.5059% ( 110) 00:07:58.991 7864.320 - 7914.732: 73.0837% ( 98) 00:07:58.991 7914.732 - 7965.145: 73.8679% ( 133) 00:07:58.991 7965.145 - 8015.557: 74.7347% ( 147) 00:07:58.991 8015.557 - 8065.969: 75.7311% ( 169) 00:07:58.991 8065.969 - 8116.382: 76.6922% ( 163) 00:07:58.991 8116.382 - 8166.794: 77.8361% ( 194) 00:07:58.991 8166.794 - 8217.206: 78.9092% ( 182) 00:07:58.991 8217.206 - 8267.618: 80.1002% ( 202) 00:07:58.991 8267.618 - 8318.031: 81.3679% ( 215) 00:07:58.991 8318.031 - 8368.443: 82.4941% ( 191) 00:07:58.991 8368.443 - 8418.855: 83.3785% ( 150) 00:07:58.991 8418.855 - 8469.268: 84.2453% ( 147) 00:07:58.991 8469.268 - 8519.680: 84.8231% ( 98) 00:07:58.991 8519.680 - 8570.092: 85.4363% ( 104) 00:07:58.991 8570.092 - 8620.505: 86.0495% ( 104) 00:07:58.991 8620.505 - 8670.917: 86.7748% ( 123) 00:07:58.991 8670.917 - 8721.329: 87.2700% ( 84) 00:07:58.991 8721.329 - 8771.742: 87.8302% ( 95) 00:07:58.991 8771.742 - 8822.154: 88.2783% ( 76) 00:07:58.991 8822.154 - 8872.566: 88.7264% ( 76) 00:07:58.991 8872.566 - 8922.978: 89.0979% ( 63) 00:07:58.991 8922.978 - 8973.391: 89.4340% ( 57) 00:07:58.991 8973.391 - 9023.803: 89.8113% ( 64) 00:07:58.991 9023.803 - 9074.215: 90.0531% ( 41) 00:07:58.991 9074.215 - 9124.628: 90.3184% ( 45) 00:07:58.991 9124.628 - 9175.040: 90.5601% ( 41) 00:07:58.991 9175.040 - 
9225.452: 90.7901% ( 39) 00:07:58.991 9225.452 - 9275.865: 91.0200% ( 39) 00:07:58.991 9275.865 - 9326.277: 91.3856% ( 62) 00:07:58.991 9326.277 - 9376.689: 91.5271% ( 24) 00:07:58.991 9376.689 - 9427.102: 91.6686% ( 24) 00:07:58.991 9427.102 - 9477.514: 91.7925% ( 21) 00:07:58.991 9477.514 - 9527.926: 91.9399% ( 25) 00:07:58.991 9527.926 - 9578.338: 92.2052% ( 45) 00:07:58.991 9578.338 - 9628.751: 92.4175% ( 36) 00:07:58.991 9628.751 - 9679.163: 92.5767% ( 27) 00:07:58.991 9679.163 - 9729.575: 92.6946% ( 20) 00:07:58.991 9729.575 - 9779.988: 92.7535% ( 10) 00:07:58.991 9779.988 - 9830.400: 92.7712% ( 3) 00:07:58.991 9830.400 - 9880.812: 92.8597% ( 15) 00:07:58.991 9880.812 - 9931.225: 92.9540% ( 16) 00:07:58.991 9931.225 - 9981.637: 93.0425% ( 15) 00:07:58.991 9981.637 - 10032.049: 93.1132% ( 12) 00:07:58.991 10032.049 - 10082.462: 93.1840% ( 12) 00:07:58.991 10082.462 - 10132.874: 93.2842% ( 17) 00:07:58.991 10132.874 - 10183.286: 93.4965% ( 36) 00:07:58.991 10183.286 - 10233.698: 93.6557% ( 27) 00:07:58.991 10233.698 - 10284.111: 93.8267% ( 29) 00:07:58.991 10284.111 - 10334.523: 93.9623% ( 23) 00:07:58.991 10334.523 - 10384.935: 94.1215% ( 27) 00:07:58.991 10384.935 - 10435.348: 94.3514% ( 39) 00:07:58.991 10435.348 - 10485.760: 94.5519% ( 34) 00:07:58.991 10485.760 - 10536.172: 94.6462% ( 16) 00:07:58.991 10536.172 - 10586.585: 94.7642% ( 20) 00:07:58.991 10586.585 - 10636.997: 94.9233% ( 27) 00:07:58.991 10636.997 - 10687.409: 95.0884% ( 28) 00:07:58.991 10687.409 - 10737.822: 95.1946% ( 18) 00:07:58.991 10737.822 - 10788.234: 95.2712% ( 13) 00:07:58.991 10788.234 - 10838.646: 95.3538% ( 14) 00:07:58.991 10838.646 - 10889.058: 95.5071% ( 26) 00:07:58.991 10889.058 - 10939.471: 95.6191% ( 19) 00:07:58.991 10939.471 - 10989.883: 95.8078% ( 32) 00:07:58.991 10989.883 - 11040.295: 96.0790% ( 46) 00:07:58.992 11040.295 - 11090.708: 96.2205% ( 24) 00:07:58.992 11090.708 - 11141.120: 96.3679% ( 25) 00:07:58.992 11141.120 - 11191.532: 96.4682% ( 17) 00:07:58.992 11191.532 - 11241.945: 96.6097% ( 24) 00:07:58.992 11241.945 - 11292.357: 96.6981% ( 15) 00:07:58.992 11292.357 - 11342.769: 96.7925% ( 16) 00:07:58.992 11342.769 - 11393.182: 96.8809% ( 15) 00:07:58.992 11393.182 - 11443.594: 97.0283% ( 25) 00:07:58.992 11443.594 - 11494.006: 97.1521% ( 21) 00:07:58.992 11494.006 - 11544.418: 97.2288% ( 13) 00:07:58.992 11544.418 - 11594.831: 97.3172% ( 15) 00:07:58.992 11594.831 - 11645.243: 97.4057% ( 15) 00:07:58.992 11645.243 - 11695.655: 97.4587% ( 9) 00:07:58.992 11695.655 - 11746.068: 97.5000% ( 7) 00:07:58.992 11746.068 - 11796.480: 97.5472% ( 8) 00:07:58.992 11796.480 - 11846.892: 97.5767% ( 5) 00:07:58.992 11846.892 - 11897.305: 97.6120% ( 6) 00:07:58.992 11897.305 - 11947.717: 97.6533% ( 7) 00:07:58.992 11947.717 - 11998.129: 97.6887% ( 6) 00:07:58.992 11998.129 - 12048.542: 97.7123% ( 4) 00:07:58.992 12048.542 - 12098.954: 97.7300% ( 3) 00:07:58.992 12098.954 - 12149.366: 97.7535% ( 4) 00:07:58.992 12149.366 - 12199.778: 97.8715% ( 20) 00:07:58.992 12199.778 - 12250.191: 98.0130% ( 24) 00:07:58.992 12250.191 - 12300.603: 98.0955% ( 14) 00:07:58.992 12300.603 - 12351.015: 98.1250% ( 5) 00:07:58.992 12351.015 - 12401.428: 98.1545% ( 5) 00:07:58.992 12401.428 - 12451.840: 98.1899% ( 6) 00:07:58.992 12451.840 - 12502.252: 98.2193% ( 5) 00:07:58.992 12502.252 - 12552.665: 98.2488% ( 5) 00:07:58.992 12552.665 - 12603.077: 98.2665% ( 3) 00:07:58.992 12603.077 - 12653.489: 98.2842% ( 3) 00:07:58.992 12653.489 - 12703.902: 98.3314% ( 8) 00:07:58.992 12703.902 - 12754.314: 98.3844% ( 9) 
00:07:58.992 12754.314 - 12804.726: 98.4493% ( 11) 00:07:58.992 12804.726 - 12855.138: 98.6910% ( 41) 00:07:58.992 12855.138 - 12905.551: 98.7323% ( 7) 00:07:58.992 12905.551 - 13006.375: 98.8325% ( 17) 00:07:58.992 13006.375 - 13107.200: 98.8915% ( 10) 00:07:58.992 13107.200 - 13208.025: 98.9623% ( 12) 00:07:58.992 13208.025 - 13308.849: 99.0389% ( 13) 00:07:58.992 13308.849 - 13409.674: 99.0861% ( 8) 00:07:58.992 13409.674 - 13510.498: 99.0979% ( 2) 00:07:58.992 13510.498 - 13611.323: 99.1156% ( 3) 00:07:58.992 13611.323 - 13712.148: 99.1274% ( 2) 00:07:58.992 13712.148 - 13812.972: 99.1450% ( 3) 00:07:58.992 13812.972 - 13913.797: 99.1627% ( 3) 00:07:58.992 13913.797 - 14014.622: 99.1745% ( 2) 00:07:58.992 14014.622 - 14115.446: 99.1922% ( 3) 00:07:58.992 14115.446 - 14216.271: 99.2040% ( 2) 00:07:58.992 14216.271 - 14317.095: 99.2158% ( 2) 00:07:58.992 14317.095 - 14417.920: 99.2335% ( 3) 00:07:58.992 14417.920 - 14518.745: 99.2453% ( 2) 00:07:58.992 17039.360 - 17140.185: 99.2512% ( 1) 00:07:58.992 17341.834 - 17442.658: 99.2866% ( 6) 00:07:58.992 17442.658 - 17543.483: 99.3160% ( 5) 00:07:58.992 17543.483 - 17644.308: 99.3455% ( 5) 00:07:58.992 17644.308 - 17745.132: 99.4517% ( 18) 00:07:58.992 17745.132 - 17845.957: 99.5696% ( 20) 00:07:58.992 17845.957 - 17946.782: 99.6226% ( 9) 00:07:58.992 22786.363 - 22887.188: 99.6285% ( 1) 00:07:58.992 22887.188 - 22988.012: 99.6521% ( 4) 00:07:58.992 22988.012 - 23088.837: 99.6816% ( 5) 00:07:58.992 23088.837 - 23189.662: 99.7111% ( 5) 00:07:58.992 23189.662 - 23290.486: 99.8113% ( 17) 00:07:58.992 23290.486 - 23391.311: 99.8408% ( 5) 00:07:58.992 23391.311 - 23492.135: 99.8644% ( 4) 00:07:58.992 23492.135 - 23592.960: 99.8880% ( 4) 00:07:58.992 23592.960 - 23693.785: 99.9175% ( 5) 00:07:58.992 23693.785 - 23794.609: 99.9351% ( 3) 00:07:58.992 23794.609 - 23895.434: 99.9646% ( 5) 00:07:58.992 23895.434 - 23996.258: 99.9882% ( 4) 00:07:58.992 23996.258 - 24097.083: 100.0000% ( 2) 00:07:58.992 00:07:58.992 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:58.992 ============================================================================== 00:07:58.992 Range in us Cumulative IO count 00:07:58.992 3629.686 - 3654.892: 0.0059% ( 1) 00:07:58.992 3654.892 - 3680.098: 0.0177% ( 2) 00:07:58.992 3680.098 - 3705.305: 0.0354% ( 3) 00:07:58.992 3705.305 - 3730.511: 0.0472% ( 2) 00:07:58.992 3730.511 - 3755.717: 0.0708% ( 4) 00:07:58.992 3755.717 - 3780.923: 0.1002% ( 5) 00:07:58.992 3780.923 - 3806.129: 0.1297% ( 5) 00:07:58.992 3806.129 - 3831.335: 0.1710% ( 7) 00:07:58.992 3831.335 - 3856.542: 0.2300% ( 10) 00:07:58.992 3856.542 - 3881.748: 0.2535% ( 4) 00:07:58.992 3881.748 - 3906.954: 0.2712% ( 3) 00:07:58.992 3906.954 - 3932.160: 0.2948% ( 4) 00:07:58.992 3932.160 - 3957.366: 0.3066% ( 2) 00:07:58.992 3957.366 - 3982.572: 0.3243% ( 3) 00:07:58.992 3982.572 - 4007.778: 0.3361% ( 2) 00:07:58.992 4007.778 - 4032.985: 0.3479% ( 2) 00:07:58.992 4032.985 - 4058.191: 0.3597% ( 2) 00:07:58.992 4058.191 - 4083.397: 0.3715% ( 2) 00:07:58.992 4083.397 - 4108.603: 0.3774% ( 1) 00:07:58.992 5847.828 - 5873.034: 0.3833% ( 1) 00:07:58.992 5923.446 - 5948.652: 0.3892% ( 1) 00:07:58.992 5999.065 - 6024.271: 0.3950% ( 1) 00:07:58.992 6024.271 - 6049.477: 0.4245% ( 5) 00:07:58.992 6049.477 - 6074.683: 0.4894% ( 11) 00:07:58.992 6074.683 - 6099.889: 0.5660% ( 13) 00:07:58.992 6099.889 - 6125.095: 0.6781% ( 19) 00:07:58.992 6125.095 - 6150.302: 0.7960% ( 20) 00:07:58.992 6150.302 - 6175.508: 1.0849% ( 49) 00:07:58.992 6175.508 - 6200.714: 1.4505% ( 62) 
00:07:58.992 6200.714 - 6225.920: 1.6981% ( 42) 00:07:58.992 6225.920 - 6251.126: 2.1875% ( 83) 00:07:58.992 6251.126 - 6276.332: 2.6887% ( 85) 00:07:58.992 6276.332 - 6301.538: 3.2606% ( 97) 00:07:58.992 6301.538 - 6326.745: 3.9976% ( 125) 00:07:58.992 6326.745 - 6351.951: 4.9410% ( 160) 00:07:58.992 6351.951 - 6377.157: 6.0142% ( 182) 00:07:58.992 6377.157 - 6402.363: 7.1285% ( 189) 00:07:58.992 6402.363 - 6427.569: 8.5849% ( 247) 00:07:58.992 6427.569 - 6452.775: 10.3361% ( 297) 00:07:58.992 6452.775 - 6503.188: 14.3986% ( 689) 00:07:58.992 6503.188 - 6553.600: 20.2594% ( 994) 00:07:58.992 6553.600 - 6604.012: 24.8231% ( 774) 00:07:58.992 6604.012 - 6654.425: 30.0177% ( 881) 00:07:58.992 6654.425 - 6704.837: 34.9410% ( 835) 00:07:58.992 6704.837 - 6755.249: 39.2689% ( 734) 00:07:58.992 6755.249 - 6805.662: 42.7830% ( 596) 00:07:58.992 6805.662 - 6856.074: 46.6274% ( 652) 00:07:58.992 6856.074 - 6906.486: 49.7465% ( 529) 00:07:58.992 6906.486 - 6956.898: 52.2288% ( 421) 00:07:58.992 6956.898 - 7007.311: 54.0920% ( 316) 00:07:58.992 7007.311 - 7057.723: 56.2854% ( 372) 00:07:58.992 7057.723 - 7108.135: 58.5024% ( 376) 00:07:58.992 7108.135 - 7158.548: 60.1533% ( 280) 00:07:58.992 7158.548 - 7208.960: 61.7099% ( 264) 00:07:58.992 7208.960 - 7259.372: 63.0542% ( 228) 00:07:58.992 7259.372 - 7309.785: 64.2512% ( 203) 00:07:58.992 7309.785 - 7360.197: 64.9823% ( 124) 00:07:58.992 7360.197 - 7410.609: 65.8903% ( 154) 00:07:58.992 7410.609 - 7461.022: 66.5802% ( 117) 00:07:58.992 7461.022 - 7511.434: 67.4292% ( 144) 00:07:58.992 7511.434 - 7561.846: 67.9717% ( 92) 00:07:58.992 7561.846 - 7612.258: 68.7028% ( 124) 00:07:58.992 7612.258 - 7662.671: 69.2276% ( 89) 00:07:58.992 7662.671 - 7713.083: 69.8526% ( 106) 00:07:58.992 7713.083 - 7763.495: 70.4540% ( 102) 00:07:58.992 7763.495 - 7813.908: 71.0200% ( 96) 00:07:58.992 7813.908 - 7864.320: 71.8514% ( 141) 00:07:58.992 7864.320 - 7914.732: 72.6651% ( 138) 00:07:58.992 7914.732 - 7965.145: 73.8856% ( 207) 00:07:58.992 7965.145 - 8015.557: 75.0354% ( 195) 00:07:58.992 8015.557 - 8065.969: 76.1851% ( 195) 00:07:58.992 8065.969 - 8116.382: 77.1757% ( 168) 00:07:58.992 8116.382 - 8166.794: 78.3373% ( 197) 00:07:58.992 8166.794 - 8217.206: 79.3632% ( 174) 00:07:58.992 8217.206 - 8267.618: 80.6899% ( 225) 00:07:58.992 8267.618 - 8318.031: 82.0106% ( 224) 00:07:58.992 8318.031 - 8368.443: 82.9009% ( 151) 00:07:58.992 8368.443 - 8418.855: 84.2040% ( 221) 00:07:58.992 8418.855 - 8469.268: 84.8762% ( 114) 00:07:58.992 8469.268 - 8519.680: 85.4304% ( 94) 00:07:58.992 8519.680 - 8570.092: 86.1085% ( 115) 00:07:58.992 8570.092 - 8620.505: 86.6156% ( 86) 00:07:58.992 8620.505 - 8670.917: 87.1639% ( 93) 00:07:58.992 8670.917 - 8721.329: 87.6592% ( 84) 00:07:58.992 8721.329 - 8771.742: 88.1250% ( 79) 00:07:58.992 8771.742 - 8822.154: 88.6498% ( 89) 00:07:58.992 8822.154 - 8872.566: 88.9682% ( 54) 00:07:58.992 8872.566 - 8922.978: 89.3455% ( 64) 00:07:58.992 8922.978 - 8973.391: 89.7406% ( 67) 00:07:58.992 8973.391 - 9023.803: 90.1002% ( 61) 00:07:58.992 9023.803 - 9074.215: 90.2594% ( 27) 00:07:58.992 9074.215 - 9124.628: 90.4363% ( 30) 00:07:58.992 9124.628 - 9175.040: 90.5955% ( 27) 00:07:58.992 9175.040 - 9225.452: 90.7783% ( 31) 00:07:58.992 9225.452 - 9275.865: 90.9906% ( 36) 00:07:58.992 9275.865 - 9326.277: 91.3267% ( 57) 00:07:58.992 9326.277 - 9376.689: 91.6686% ( 58) 00:07:58.992 9376.689 - 9427.102: 91.8042% ( 23) 00:07:58.992 9427.102 - 9477.514: 91.9340% ( 22) 00:07:58.992 9477.514 - 9527.926: 92.1226% ( 32) 00:07:58.992 9527.926 - 9578.338: 
92.2170% ( 16) 00:07:58.992 9578.338 - 9628.751: 92.2759% ( 10) 00:07:58.993 9628.751 - 9679.163: 92.3585% ( 14) 00:07:58.993 9679.163 - 9729.575: 92.4351% ( 13) 00:07:58.993 9729.575 - 9779.988: 92.5000% ( 11) 00:07:58.993 9779.988 - 9830.400: 92.7358% ( 40) 00:07:58.993 9830.400 - 9880.812: 92.7889% ( 9) 00:07:58.993 9880.812 - 9931.225: 92.8184% ( 5) 00:07:58.993 9931.225 - 9981.637: 92.8597% ( 7) 00:07:58.993 9981.637 - 10032.049: 92.8950% ( 6) 00:07:58.993 10032.049 - 10082.462: 92.9658% ( 12) 00:07:58.993 10082.462 - 10132.874: 93.1073% ( 24) 00:07:58.993 10132.874 - 10183.286: 93.3078% ( 34) 00:07:58.993 10183.286 - 10233.698: 93.5083% ( 34) 00:07:58.993 10233.698 - 10284.111: 93.7205% ( 36) 00:07:58.993 10284.111 - 10334.523: 93.8620% ( 24) 00:07:58.993 10334.523 - 10384.935: 94.0153% ( 26) 00:07:58.993 10384.935 - 10435.348: 94.1392% ( 21) 00:07:58.993 10435.348 - 10485.760: 94.3514% ( 36) 00:07:58.993 10485.760 - 10536.172: 94.5460% ( 33) 00:07:58.993 10536.172 - 10586.585: 94.7818% ( 40) 00:07:58.993 10586.585 - 10636.997: 95.0649% ( 48) 00:07:58.993 10636.997 - 10687.409: 95.3302% ( 45) 00:07:58.993 10687.409 - 10737.822: 95.4953% ( 28) 00:07:58.993 10737.822 - 10788.234: 95.6840% ( 32) 00:07:58.993 10788.234 - 10838.646: 95.8844% ( 34) 00:07:58.993 10838.646 - 10889.058: 96.1144% ( 39) 00:07:58.993 10889.058 - 10939.471: 96.2972% ( 31) 00:07:58.993 10939.471 - 10989.883: 96.4033% ( 18) 00:07:58.993 10989.883 - 11040.295: 96.4741% ( 12) 00:07:58.993 11040.295 - 11090.708: 96.5389% ( 11) 00:07:58.993 11090.708 - 11141.120: 96.6097% ( 12) 00:07:58.993 11141.120 - 11191.532: 96.6450% ( 6) 00:07:58.993 11191.532 - 11241.945: 96.6745% ( 5) 00:07:58.993 11241.945 - 11292.357: 96.7099% ( 6) 00:07:58.993 11292.357 - 11342.769: 96.7925% ( 14) 00:07:58.993 11342.769 - 11393.182: 96.9104% ( 20) 00:07:58.993 11393.182 - 11443.594: 97.0991% ( 32) 00:07:58.993 11443.594 - 11494.006: 97.1698% ( 12) 00:07:58.993 11494.006 - 11544.418: 97.2288% ( 10) 00:07:58.993 11544.418 - 11594.831: 97.2759% ( 8) 00:07:58.993 11594.831 - 11645.243: 97.3172% ( 7) 00:07:58.993 11645.243 - 11695.655: 97.3408% ( 4) 00:07:58.993 11695.655 - 11746.068: 97.3585% ( 3) 00:07:58.993 11746.068 - 11796.480: 97.3644% ( 1) 00:07:58.993 11796.480 - 11846.892: 97.3762% ( 2) 00:07:58.993 11846.892 - 11897.305: 97.4175% ( 7) 00:07:58.993 11897.305 - 11947.717: 97.4764% ( 10) 00:07:58.993 11947.717 - 11998.129: 97.5472% ( 12) 00:07:58.993 11998.129 - 12048.542: 97.7358% ( 32) 00:07:58.993 12048.542 - 12098.954: 97.8007% ( 11) 00:07:58.993 12098.954 - 12149.366: 97.8538% ( 9) 00:07:58.993 12149.366 - 12199.778: 97.9186% ( 11) 00:07:58.993 12199.778 - 12250.191: 97.9658% ( 8) 00:07:58.993 12250.191 - 12300.603: 98.0307% ( 11) 00:07:58.993 12300.603 - 12351.015: 98.0837% ( 9) 00:07:58.993 12351.015 - 12401.428: 98.1427% ( 10) 00:07:58.993 12401.428 - 12451.840: 98.2017% ( 10) 00:07:58.993 12451.840 - 12502.252: 98.3019% ( 17) 00:07:58.993 12502.252 - 12552.665: 98.3844% ( 14) 00:07:58.993 12552.665 - 12603.077: 98.4316% ( 8) 00:07:58.993 12603.077 - 12653.489: 98.4788% ( 8) 00:07:58.993 12653.489 - 12703.902: 98.5200% ( 7) 00:07:58.993 12703.902 - 12754.314: 98.5554% ( 6) 00:07:58.993 12754.314 - 12804.726: 98.6085% ( 9) 00:07:58.993 12804.726 - 12855.138: 98.6557% ( 8) 00:07:58.993 12855.138 - 12905.551: 98.6792% ( 4) 00:07:58.993 12905.551 - 13006.375: 98.7677% ( 15) 00:07:58.993 13006.375 - 13107.200: 98.8502% ( 14) 00:07:58.993 13107.200 - 13208.025: 98.9564% ( 18) 00:07:58.993 13208.025 - 13308.849: 99.0035% ( 8) 
00:07:58.993 13308.849 - 13409.674: 99.0507% ( 8) 00:07:58.993 13409.674 - 13510.498: 99.1038% ( 9) 00:07:58.993 13510.498 - 13611.323: 99.1509% ( 8) 00:07:58.993 13611.323 - 13712.148: 99.1863% ( 6) 00:07:58.993 13712.148 - 13812.972: 99.2040% ( 3) 00:07:58.993 13812.972 - 13913.797: 99.2217% ( 3) 00:07:58.993 13913.797 - 14014.622: 99.2394% ( 3) 00:07:58.993 14014.622 - 14115.446: 99.2453% ( 1) 00:07:58.993 16636.062 - 16736.886: 99.2512% ( 1) 00:07:58.993 16736.886 - 16837.711: 99.2748% ( 4) 00:07:58.993 16837.711 - 16938.535: 99.3101% ( 6) 00:07:58.993 16938.535 - 17039.360: 99.3455% ( 6) 00:07:58.993 17039.360 - 17140.185: 99.3809% ( 6) 00:07:58.993 17140.185 - 17241.009: 99.4163% ( 6) 00:07:58.993 17241.009 - 17341.834: 99.4575% ( 7) 00:07:58.993 17341.834 - 17442.658: 99.4870% ( 5) 00:07:58.993 17442.658 - 17543.483: 99.5224% ( 6) 00:07:58.993 17543.483 - 17644.308: 99.5401% ( 3) 00:07:58.993 17644.308 - 17745.132: 99.5578% ( 3) 00:07:58.993 17745.132 - 17845.957: 99.5755% ( 3) 00:07:58.993 17845.957 - 17946.782: 99.5932% ( 3) 00:07:58.993 17946.782 - 18047.606: 99.6167% ( 4) 00:07:58.993 18047.606 - 18148.431: 99.6226% ( 1) 00:07:58.993 22685.538 - 22786.363: 99.6403% ( 3) 00:07:58.993 22786.363 - 22887.188: 99.6875% ( 8) 00:07:58.993 22887.188 - 22988.012: 99.7347% ( 8) 00:07:58.993 22988.012 - 23088.837: 99.7818% ( 8) 00:07:58.993 23088.837 - 23189.662: 99.7936% ( 2) 00:07:58.993 23592.960 - 23693.785: 99.8054% ( 2) 00:07:58.993 23693.785 - 23794.609: 99.8172% ( 2) 00:07:58.993 23794.609 - 23895.434: 99.8349% ( 3) 00:07:58.993 23895.434 - 23996.258: 99.8467% ( 2) 00:07:58.993 23996.258 - 24097.083: 99.8703% ( 4) 00:07:58.993 24097.083 - 24197.908: 99.8998% ( 5) 00:07:58.993 24197.908 - 24298.732: 99.9233% ( 4) 00:07:58.993 24298.732 - 24399.557: 99.9469% ( 4) 00:07:58.993 24399.557 - 24500.382: 99.9705% ( 4) 00:07:58.993 24500.382 - 24601.206: 99.9941% ( 4) 00:07:58.993 24601.206 - 24702.031: 100.0000% ( 1) 00:07:58.993 00:07:58.993 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:58.993 ============================================================================== 00:07:58.993 Range in us Cumulative IO count 00:07:58.993 3377.625 - 3402.831: 0.0059% ( 1) 00:07:58.993 3402.831 - 3428.037: 0.0236% ( 3) 00:07:58.993 3428.037 - 3453.243: 0.0413% ( 3) 00:07:58.993 3453.243 - 3478.449: 0.1061% ( 11) 00:07:58.993 3478.449 - 3503.655: 0.1887% ( 14) 00:07:58.993 3503.655 - 3528.862: 0.2358% ( 8) 00:07:58.993 3528.862 - 3554.068: 0.2476% ( 2) 00:07:58.993 3554.068 - 3579.274: 0.2594% ( 2) 00:07:58.993 3579.274 - 3604.480: 0.2712% ( 2) 00:07:58.993 3604.480 - 3629.686: 0.2830% ( 2) 00:07:58.993 3629.686 - 3654.892: 0.2948% ( 2) 00:07:58.993 3654.892 - 3680.098: 0.3066% ( 2) 00:07:58.993 3680.098 - 3705.305: 0.3184% ( 2) 00:07:58.993 3705.305 - 3730.511: 0.3361% ( 3) 00:07:58.993 3730.511 - 3755.717: 0.3420% ( 1) 00:07:58.993 3755.717 - 3780.923: 0.3597% ( 3) 00:07:58.993 3780.923 - 3806.129: 0.3715% ( 2) 00:07:58.993 3806.129 - 3831.335: 0.3774% ( 1) 00:07:58.993 5646.178 - 5671.385: 0.3833% ( 1) 00:07:58.993 5772.209 - 5797.415: 0.4009% ( 3) 00:07:58.993 5797.415 - 5822.622: 0.4127% ( 2) 00:07:58.993 5822.622 - 5847.828: 0.4363% ( 4) 00:07:58.993 5847.828 - 5873.034: 0.4658% ( 5) 00:07:58.993 5873.034 - 5898.240: 0.5130% ( 8) 00:07:58.993 5898.240 - 5923.446: 0.6073% ( 16) 00:07:58.993 5923.446 - 5948.652: 0.6309% ( 4) 00:07:58.993 5948.652 - 5973.858: 0.6427% ( 2) 00:07:58.993 5973.858 - 5999.065: 0.6545% ( 2) 00:07:58.993 5999.065 - 6024.271: 0.6722% ( 3) 00:07:58.993 
6024.271 - 6049.477: 0.7075% ( 6) 00:07:58.993 6049.477 - 6074.683: 0.7429% ( 6) 00:07:58.993 6074.683 - 6099.889: 0.7783% ( 6) 00:07:58.993 6099.889 - 6125.095: 0.9139% ( 23) 00:07:58.993 6125.095 - 6150.302: 1.0436% ( 22) 00:07:58.993 6150.302 - 6175.508: 1.1792% ( 23) 00:07:58.993 6175.508 - 6200.714: 1.3915% ( 36) 00:07:58.993 6200.714 - 6225.920: 1.7276% ( 57) 00:07:58.993 6225.920 - 6251.126: 2.0991% ( 63) 00:07:58.993 6251.126 - 6276.332: 2.5354% ( 74) 00:07:58.993 6276.332 - 6301.538: 3.0837% ( 93) 00:07:58.993 6301.538 - 6326.745: 3.7913% ( 120) 00:07:58.993 6326.745 - 6351.951: 4.7759% ( 167) 00:07:58.993 6351.951 - 6377.157: 5.7783% ( 170) 00:07:58.993 6377.157 - 6402.363: 6.9517% ( 199) 00:07:58.993 6402.363 - 6427.569: 8.8679% ( 325) 00:07:58.993 6427.569 - 6452.775: 10.8373% ( 334) 00:07:58.993 6452.775 - 6503.188: 15.0177% ( 709) 00:07:58.993 6503.188 - 6553.600: 19.1274% ( 697) 00:07:58.993 6553.600 - 6604.012: 23.8149% ( 795) 00:07:58.993 6604.012 - 6654.425: 29.7288% ( 1003) 00:07:58.993 6654.425 - 6704.837: 34.6580% ( 836) 00:07:58.993 6704.837 - 6755.249: 38.9564% ( 729) 00:07:58.993 6755.249 - 6805.662: 43.6026% ( 788) 00:07:58.993 6805.662 - 6856.074: 47.4646% ( 655) 00:07:58.993 6856.074 - 6906.486: 51.4446% ( 675) 00:07:58.993 6906.486 - 6956.898: 53.8325% ( 405) 00:07:58.993 6956.898 - 7007.311: 55.8432% ( 341) 00:07:58.993 7007.311 - 7057.723: 57.8125% ( 334) 00:07:58.993 7057.723 - 7108.135: 59.5519% ( 295) 00:07:58.993 7108.135 - 7158.548: 60.6722% ( 190) 00:07:58.993 7158.548 - 7208.960: 61.6450% ( 165) 00:07:58.993 7208.960 - 7259.372: 62.5472% ( 153) 00:07:58.993 7259.372 - 7309.785: 63.8090% ( 214) 00:07:58.993 7309.785 - 7360.197: 64.6344% ( 140) 00:07:58.993 7360.197 - 7410.609: 65.3361% ( 119) 00:07:58.993 7410.609 - 7461.022: 66.2559% ( 156) 00:07:58.993 7461.022 - 7511.434: 66.9517% ( 118) 00:07:58.993 7511.434 - 7561.846: 67.7712% ( 139) 00:07:58.993 7561.846 - 7612.258: 68.4906% ( 122) 00:07:58.993 7612.258 - 7662.671: 69.3691% ( 149) 00:07:58.993 7662.671 - 7713.083: 70.1592% ( 134) 00:07:58.993 7713.083 - 7763.495: 70.9788% ( 139) 00:07:58.993 7763.495 - 7813.908: 71.7453% ( 130) 00:07:58.994 7813.908 - 7864.320: 72.5118% ( 130) 00:07:58.994 7864.320 - 7914.732: 73.3785% ( 147) 00:07:58.994 7914.732 - 7965.145: 74.2571% ( 149) 00:07:58.994 7965.145 - 8015.557: 75.3066% ( 178) 00:07:58.994 8015.557 - 8065.969: 76.4269% ( 190) 00:07:58.994 8065.969 - 8116.382: 77.6002% ( 199) 00:07:58.994 8116.382 - 8166.794: 78.7618% ( 197) 00:07:58.994 8166.794 - 8217.206: 79.9057% ( 194) 00:07:58.994 8217.206 - 8267.618: 80.9611% ( 179) 00:07:58.994 8267.618 - 8318.031: 82.2465% ( 218) 00:07:58.994 8318.031 - 8368.443: 83.2665% ( 173) 00:07:58.994 8368.443 - 8418.855: 84.0389% ( 131) 00:07:58.994 8418.855 - 8469.268: 84.9469% ( 154) 00:07:58.994 8469.268 - 8519.680: 85.5248% ( 98) 00:07:58.994 8519.680 - 8570.092: 86.0142% ( 83) 00:07:58.994 8570.092 - 8620.505: 86.5861% ( 97) 00:07:58.994 8620.505 - 8670.917: 86.9281% ( 58) 00:07:58.994 8670.917 - 8721.329: 87.3349% ( 69) 00:07:58.994 8721.329 - 8771.742: 87.8479% ( 87) 00:07:58.994 8771.742 - 8822.154: 88.3550% ( 86) 00:07:58.994 8822.154 - 8872.566: 88.7913% ( 74) 00:07:58.994 8872.566 - 8922.978: 89.1450% ( 60) 00:07:58.994 8922.978 - 8973.391: 89.6639% ( 88) 00:07:58.994 8973.391 - 9023.803: 90.0236% ( 61) 00:07:58.994 9023.803 - 9074.215: 90.2594% ( 40) 00:07:58.994 9074.215 - 9124.628: 90.5483% ( 49) 00:07:58.994 9124.628 - 9175.040: 90.7488% ( 34) 00:07:58.994 9175.040 - 9225.452: 90.8667% ( 20) 
00:07:58.994 9225.452 - 9275.865: 90.9729% ( 18) 00:07:58.994 9275.865 - 9326.277: 91.0672% ( 16) 00:07:58.994 9326.277 - 9376.689: 91.3267% ( 44) 00:07:58.994 9376.689 - 9427.102: 91.4682% ( 24) 00:07:58.994 9427.102 - 9477.514: 91.6333% ( 28) 00:07:58.994 9477.514 - 9527.926: 91.7335% ( 17) 00:07:58.994 9527.926 - 9578.338: 91.8337% ( 17) 00:07:58.994 9578.338 - 9628.751: 92.0755% ( 41) 00:07:58.994 9628.751 - 9679.163: 92.1639% ( 15) 00:07:58.994 9679.163 - 9729.575: 92.3231% ( 27) 00:07:58.994 9729.575 - 9779.988: 92.4116% ( 15) 00:07:58.994 9779.988 - 9830.400: 92.5000% ( 15) 00:07:58.994 9830.400 - 9880.812: 92.8774% ( 64) 00:07:58.994 9880.812 - 9931.225: 93.0248% ( 25) 00:07:58.994 9931.225 - 9981.637: 93.1840% ( 27) 00:07:58.994 9981.637 - 10032.049: 93.2783% ( 16) 00:07:58.994 10032.049 - 10082.462: 93.3314% ( 9) 00:07:58.994 10082.462 - 10132.874: 93.3903% ( 10) 00:07:58.994 10132.874 - 10183.286: 93.4788% ( 15) 00:07:58.994 10183.286 - 10233.698: 93.6144% ( 23) 00:07:58.994 10233.698 - 10284.111: 93.7854% ( 29) 00:07:58.994 10284.111 - 10334.523: 93.9741% ( 32) 00:07:58.994 10334.523 - 10384.935: 94.1804% ( 35) 00:07:58.994 10384.935 - 10435.348: 94.4281% ( 42) 00:07:58.994 10435.348 - 10485.760: 94.6403% ( 36) 00:07:58.994 10485.760 - 10536.172: 94.7877% ( 25) 00:07:58.994 10536.172 - 10586.585: 94.9057% ( 20) 00:07:58.994 10586.585 - 10636.997: 95.0000% ( 16) 00:07:58.994 10636.997 - 10687.409: 95.0825% ( 14) 00:07:58.994 10687.409 - 10737.822: 95.1592% ( 13) 00:07:58.994 10737.822 - 10788.234: 95.2241% ( 11) 00:07:58.994 10788.234 - 10838.646: 95.3420% ( 20) 00:07:58.994 10838.646 - 10889.058: 95.5071% ( 28) 00:07:58.994 10889.058 - 10939.471: 95.7134% ( 35) 00:07:58.994 10939.471 - 10989.883: 95.9021% ( 32) 00:07:58.994 10989.883 - 11040.295: 96.0142% ( 19) 00:07:58.994 11040.295 - 11090.708: 96.0908% ( 13) 00:07:58.994 11090.708 - 11141.120: 96.1851% ( 16) 00:07:58.994 11141.120 - 11191.532: 96.2618% ( 13) 00:07:58.994 11191.532 - 11241.945: 96.4917% ( 39) 00:07:58.994 11241.945 - 11292.357: 96.6686% ( 30) 00:07:58.994 11292.357 - 11342.769: 96.7453% ( 13) 00:07:58.994 11342.769 - 11393.182: 96.8455% ( 17) 00:07:58.994 11393.182 - 11443.594: 96.9281% ( 14) 00:07:58.994 11443.594 - 11494.006: 96.9988% ( 12) 00:07:58.994 11494.006 - 11544.418: 97.0519% ( 9) 00:07:58.994 11544.418 - 11594.831: 97.1285% ( 13) 00:07:58.994 11594.831 - 11645.243: 97.2111% ( 14) 00:07:58.994 11645.243 - 11695.655: 97.2936% ( 14) 00:07:58.994 11695.655 - 11746.068: 97.3762% ( 14) 00:07:58.994 11746.068 - 11796.480: 97.4175% ( 7) 00:07:58.994 11796.480 - 11846.892: 97.4646% ( 8) 00:07:58.994 11846.892 - 11897.305: 97.5118% ( 8) 00:07:58.994 11897.305 - 11947.717: 97.5708% ( 10) 00:07:58.994 11947.717 - 11998.129: 97.6356% ( 11) 00:07:58.994 11998.129 - 12048.542: 97.7241% ( 15) 00:07:58.994 12048.542 - 12098.954: 97.7948% ( 12) 00:07:58.994 12098.954 - 12149.366: 97.8361% ( 7) 00:07:58.994 12149.366 - 12199.778: 97.8774% ( 7) 00:07:58.994 12199.778 - 12250.191: 97.9245% ( 8) 00:07:58.994 12250.191 - 12300.603: 97.9953% ( 12) 00:07:58.994 12300.603 - 12351.015: 98.0837% ( 15) 00:07:58.994 12351.015 - 12401.428: 98.2842% ( 34) 00:07:58.994 12401.428 - 12451.840: 98.4257% ( 24) 00:07:58.994 12451.840 - 12502.252: 98.5259% ( 17) 00:07:58.994 12502.252 - 12552.665: 98.6557% ( 22) 00:07:58.994 12552.665 - 12603.077: 98.7323% ( 13) 00:07:58.994 12603.077 - 12653.489: 98.7618% ( 5) 00:07:58.994 12653.489 - 12703.902: 98.7972% ( 6) 00:07:58.994 12703.902 - 12754.314: 98.8267% ( 5) 00:07:58.994 12754.314 - 
12804.726: 98.8502% ( 4) 00:07:58.994 12804.726 - 12855.138: 98.8738% ( 4) 00:07:58.994 12855.138 - 12905.551: 98.8974% ( 4) 00:07:58.994 12905.551 - 13006.375: 98.9328% ( 6) 00:07:58.994 13006.375 - 13107.200: 98.9741% ( 7) 00:07:58.994 13107.200 - 13208.025: 99.0153% ( 7) 00:07:58.994 13208.025 - 13308.849: 99.0625% ( 8) 00:07:58.994 13308.849 - 13409.674: 99.1038% ( 7) 00:07:58.994 13409.674 - 13510.498: 99.1568% ( 9) 00:07:58.994 13510.498 - 13611.323: 99.2040% ( 8) 00:07:58.994 13611.323 - 13712.148: 99.2453% ( 7) 00:07:58.994 16636.062 - 16736.886: 99.2748% ( 5) 00:07:58.994 16736.886 - 16837.711: 99.3101% ( 6) 00:07:58.994 16837.711 - 16938.535: 99.3396% ( 5) 00:07:58.994 16938.535 - 17039.360: 99.3691% ( 5) 00:07:58.994 17039.360 - 17140.185: 99.4045% ( 6) 00:07:58.994 17140.185 - 17241.009: 99.4340% ( 5) 00:07:58.994 17241.009 - 17341.834: 99.4693% ( 6) 00:07:58.994 17341.834 - 17442.658: 99.4988% ( 5) 00:07:58.994 17442.658 - 17543.483: 99.5283% ( 5) 00:07:58.994 17543.483 - 17644.308: 99.5460% ( 3) 00:07:58.994 17644.308 - 17745.132: 99.5696% ( 4) 00:07:58.994 17745.132 - 17845.957: 99.5814% ( 2) 00:07:58.994 17845.957 - 17946.782: 99.5991% ( 3) 00:07:58.994 17946.782 - 18047.606: 99.6226% ( 4) 00:07:58.994 22685.538 - 22786.363: 99.6285% ( 1) 00:07:58.994 22786.363 - 22887.188: 99.6403% ( 2) 00:07:58.994 22887.188 - 22988.012: 99.6580% ( 3) 00:07:58.994 22988.012 - 23088.837: 99.6698% ( 2) 00:07:58.994 23088.837 - 23189.662: 99.6816% ( 2) 00:07:58.994 23189.662 - 23290.486: 99.7229% ( 7) 00:07:58.994 23290.486 - 23391.311: 99.7995% ( 13) 00:07:58.994 23391.311 - 23492.135: 99.8585% ( 10) 00:07:58.994 23492.135 - 23592.960: 99.8762% ( 3) 00:07:58.994 23592.960 - 23693.785: 99.8939% ( 3) 00:07:58.994 23693.785 - 23794.609: 99.9175% ( 4) 00:07:58.994 23794.609 - 23895.434: 99.9351% ( 3) 00:07:58.994 23895.434 - 23996.258: 99.9528% ( 3) 00:07:58.994 23996.258 - 24097.083: 99.9764% ( 4) 00:07:58.994 24097.083 - 24197.908: 100.0000% ( 4) 00:07:58.994 00:07:58.994 03:08:02 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:58.994 00:07:58.994 real 0m2.457s 00:07:58.994 user 0m2.153s 00:07:58.994 sys 0m0.190s 00:07:58.994 03:08:02 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.994 03:08:02 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:58.994 ************************************ 00:07:58.994 END TEST nvme_perf 00:07:58.994 ************************************ 00:07:58.994 03:08:02 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:58.994 03:08:02 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:58.994 03:08:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.994 03:08:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.994 ************************************ 00:07:58.994 START TEST nvme_hello_world 00:07:58.994 ************************************ 00:07:58.994 03:08:02 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:59.253 Initializing NVMe Controllers 00:07:59.253 Attached to 0000:00:11.0 00:07:59.253 Namespace ID: 1 size: 5GB 00:07:59.253 Attached to 0000:00:13.0 00:07:59.253 Namespace ID: 1 size: 1GB 00:07:59.253 Attached to 0000:00:10.0 00:07:59.253 Namespace ID: 1 size: 6GB 00:07:59.253 Attached to 0000:00:12.0 00:07:59.253 Namespace ID: 1 size: 4GB 00:07:59.253 Namespace ID: 2 size: 4GB 00:07:59.253 Namespace ID: 3 size: 
00:07:59.253 Initialization complete.
00:07:59.253 INFO: using host memory buffer for IO
00:07:59.253 Hello world!
00:07:59.253 INFO: using host memory buffer for IO
00:07:59.253 Hello world!
00:07:59.253 INFO: using host memory buffer for IO
00:07:59.253 Hello world!
00:07:59.253 INFO: using host memory buffer for IO
00:07:59.253 Hello world!
00:07:59.253 INFO: using host memory buffer for IO
00:07:59.253 Hello world!
00:07:59.253 INFO: using host memory buffer for IO
00:07:59.253 Hello world!
00:07:59.253 
00:07:59.253 real 0m0.183s
00:07:59.253 user 0m0.060s
00:07:59.253 sys 0m0.081s
00:07:59.253 03:08:02 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:59.253 03:08:02 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:59.253 ************************************
00:07:59.253 END TEST nvme_hello_world
00:07:59.253 ************************************
00:07:59.253 03:08:02 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:59.253 03:08:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:59.253 03:08:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:59.253 03:08:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:59.253 ************************************
00:07:59.253 START TEST nvme_sgl
00:07:59.253 ************************************
00:07:59.253 03:08:02 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:59.511 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:59.511 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:59.511 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:59.511 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:59.511 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:59.511 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:59.511 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:59.511 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:59.511 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:59.511 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:59.511 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:59.511 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:59.511 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:59.511 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:59.511 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:59.511 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:59.511 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:59.511 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:59.511 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:59.511 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:59.511 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:59.511 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:59.511 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:59.511 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:59.512 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:59.512 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:59.512 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:59.512 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:59.512 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:59.512 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:59.512 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:59.512 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:59.512 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:59.512 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:59.512 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:59.512 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:59.512 NVMe Readv/Writev Request test
00:07:59.512 Attached to 0000:00:11.0
00:07:59.512 Attached to 0000:00:13.0
00:07:59.512 Attached to 0000:00:10.0
00:07:59.512 Attached to 0000:00:12.0
00:07:59.512 0000:00:11.0: build_io_request_2 test passed
00:07:59.512 0000:00:11.0: build_io_request_4 test passed
00:07:59.512 0000:00:11.0: build_io_request_5 test passed
00:07:59.512 0000:00:11.0: build_io_request_6 test passed
00:07:59.512 0000:00:11.0: build_io_request_7 test passed
00:07:59.512 0000:00:11.0: build_io_request_10 test passed
00:07:59.512 0000:00:10.0: build_io_request_2 test passed
00:07:59.512 0000:00:10.0: build_io_request_4 test passed
00:07:59.512 0000:00:10.0: build_io_request_5 test passed
00:07:59.512 0000:00:10.0: build_io_request_6 test passed
00:07:59.512 0000:00:10.0: build_io_request_7 test passed
00:07:59.512 0000:00:10.0: build_io_request_10 test passed
00:07:59.512 Cleaning up...
00:07:59.512 
00:07:59.512 real 0m0.250s
00:07:59.512 user 0m0.124s
00:07:59.512 sys 0m0.079s
00:07:59.512 03:08:03 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:59.512 03:08:03 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:59.512 ************************************
00:07:59.512 END TEST nvme_sgl
00:07:59.512 ************************************
00:07:59.512 03:08:03 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:59.512 03:08:03 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:59.512 03:08:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:59.512 03:08:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:59.512 ************************************
00:07:59.512 START TEST nvme_e2edp
00:07:59.512 ************************************
00:07:59.512 03:08:03 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:59.771 NVMe Write/Read with End-to-End data protection test
00:07:59.771 Attached to 0000:00:11.0
00:07:59.771 Attached to 0000:00:13.0
00:07:59.771 Attached to 0000:00:10.0
00:07:59.771 Attached to 0000:00:12.0
00:07:59.771 Cleaning up...
00:07:59.771 ************************************
00:07:59.771 END TEST nvme_e2edp
00:07:59.771 ************************************
00:07:59.771 
00:07:59.771 real 0m0.188s
00:07:59.771 user 0m0.065s
00:07:59.771 sys 0m0.074s
00:07:59.771 03:08:03 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:59.771 03:08:03 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:59.771 03:08:03 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:59.771 03:08:03 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:59.771 03:08:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:59.771 03:08:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:59.771 ************************************
00:07:59.771 START TEST nvme_reserve
00:07:59.771 ************************************
00:07:59.771 03:08:03 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:00.030 =====================================================
00:08:00.030 NVMe Controller at PCI bus 0, device 17, function 0
00:08:00.030 =====================================================
00:08:00.030 Reservations: Not Supported
00:08:00.030 =====================================================
00:08:00.030 NVMe Controller at PCI bus 0, device 19, function 0
00:08:00.030 =====================================================
00:08:00.030 Reservations: Not Supported
00:08:00.030 =====================================================
00:08:00.030 NVMe Controller at PCI bus 0, device 16, function 0
00:08:00.030 =====================================================
00:08:00.030 Reservations: Not Supported
00:08:00.030 =====================================================
00:08:00.030 NVMe Controller at PCI bus 0, device 18, function 0
00:08:00.030 =====================================================
00:08:00.030 Reservations: Not Supported
00:08:00.030 Reservation test passed
00:08:00.030 
00:08:00.030 real 0m0.172s
00:08:00.030 user 0m0.058s
00:08:00.030 sys 0m0.075s
00:08:00.030 03:08:03 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:00.030 ************************************
00:08:00.030 END TEST nvme_reserve
00:08:00.030 ************************************
00:08:00.030 03:08:03 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:08:00.030 03:08:03 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:00.030 03:08:03 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:00.030 03:08:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:00.030 03:08:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:00.030 ************************************
00:08:00.030 START TEST nvme_err_injection
00:08:00.030 ************************************
00:08:00.030 03:08:03 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:00.289 NVMe Error Injection test
00:08:00.289 Attached to 0000:00:11.0
00:08:00.289 Attached to 0000:00:13.0
00:08:00.289 Attached to 0000:00:10.0
00:08:00.289 Attached to 0000:00:12.0
00:08:00.289 0000:00:11.0: get features failed as expected
00:08:00.289 0000:00:13.0: get features failed as expected
00:08:00.289 0000:00:10.0: get features failed as expected
00:08:00.289 0000:00:12.0: get features failed as expected
0000:00:11.0: get features successfully as expected 00:08:00.289 0000:00:13.0: get features successfully as expected 00:08:00.289 0000:00:10.0: get features successfully as expected 00:08:00.289 0000:00:12.0: get features successfully as expected 00:08:00.289 0000:00:13.0: read failed as expected 00:08:00.289 0000:00:10.0: read failed as expected 00:08:00.289 0000:00:11.0: read failed as expected 00:08:00.289 0000:00:12.0: read failed as expected 00:08:00.289 0000:00:12.0: read successfully as expected 00:08:00.289 0000:00:11.0: read successfully as expected 00:08:00.289 0000:00:13.0: read successfully as expected 00:08:00.289 0000:00:10.0: read successfully as expected 00:08:00.289 Cleaning up... 00:08:00.289 00:08:00.289 real 0m0.188s 00:08:00.289 user 0m0.062s 00:08:00.289 sys 0m0.081s 00:08:00.289 03:08:03 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.289 03:08:03 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:00.289 ************************************ 00:08:00.289 END TEST nvme_err_injection 00:08:00.289 ************************************ 00:08:00.289 03:08:03 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:00.289 03:08:03 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:00.289 03:08:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.289 03:08:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.289 ************************************ 00:08:00.289 START TEST nvme_overhead 00:08:00.289 ************************************ 00:08:00.289 03:08:03 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:01.673 Initializing NVMe Controllers 00:08:01.673 Attached to 0000:00:11.0 00:08:01.673 Attached to 0000:00:13.0 00:08:01.673 Attached to 0000:00:10.0 00:08:01.673 Attached to 0000:00:12.0 00:08:01.673 Initialization complete. Launching workers. 
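The submit/complete histograms that follow are printed by the overhead example itself: each row is a latency bucket in microseconds, the percentage column accumulates across buckets (reaching 100% at the last row), and the per-bucket I/O count follows in parentheses. To reproduce the run by hand, the exact command from the trace is:

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0

where, by inference from the output, -o is the I/O size in bytes, -t the run time in seconds, -H requests the histograms, and -i the SPDK shared-memory ID.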
00:08:01.673 submit (in ns) avg, min, max = 11357.5, 9683.1, 295050.8 00:08:01.673 complete (in ns) avg, min, max = 7806.5, 7200.0, 312000.8 00:08:01.673 00:08:01.673 Submit histogram 00:08:01.673 ================ 00:08:01.673 Range in us Cumulative Count 00:08:01.673 9.649 - 9.698: 0.0060% ( 1) 00:08:01.673 9.797 - 9.846: 0.0120% ( 1) 00:08:01.673 10.191 - 10.240: 0.0179% ( 1) 00:08:01.673 10.535 - 10.585: 0.0299% ( 2) 00:08:01.673 10.585 - 10.634: 0.1613% ( 22) 00:08:01.673 10.634 - 10.683: 0.9501% ( 132) 00:08:01.673 10.683 - 10.732: 3.4658% ( 421) 00:08:01.673 10.732 - 10.782: 8.4314% ( 831) 00:08:01.673 10.782 - 10.831: 17.1736% ( 1463) 00:08:01.673 10.831 - 10.880: 28.6465% ( 1920) 00:08:01.673 10.880 - 10.929: 40.7171% ( 2020) 00:08:01.673 10.929 - 10.978: 52.2737% ( 1934) 00:08:01.673 10.978 - 11.028: 61.8942% ( 1610) 00:08:01.673 11.028 - 11.077: 69.8656% ( 1334) 00:08:01.673 11.077 - 11.126: 76.2414% ( 1067) 00:08:01.673 11.126 - 11.175: 80.8186% ( 766) 00:08:01.673 11.175 - 11.225: 83.7347% ( 488) 00:08:01.673 11.225 - 11.274: 85.6110% ( 314) 00:08:01.673 11.274 - 11.323: 86.7463% ( 190) 00:08:01.673 11.323 - 11.372: 87.4634% ( 120) 00:08:01.673 11.372 - 11.422: 88.0370% ( 96) 00:08:01.673 11.422 - 11.471: 88.4075% ( 62) 00:08:01.673 11.471 - 11.520: 88.7780% ( 62) 00:08:01.673 11.520 - 11.569: 89.0828% ( 51) 00:08:01.673 11.569 - 11.618: 89.3337% ( 42) 00:08:01.673 11.618 - 11.668: 89.6026% ( 45) 00:08:01.673 11.668 - 11.717: 89.8954% ( 49) 00:08:01.673 11.717 - 11.766: 90.2659% ( 62) 00:08:01.673 11.766 - 11.815: 90.5707% ( 51) 00:08:01.673 11.815 - 11.865: 90.8874% ( 53) 00:08:01.673 11.865 - 11.914: 91.2280% ( 57) 00:08:01.673 11.914 - 11.963: 91.6881% ( 77) 00:08:01.673 11.963 - 12.012: 92.1243% ( 73) 00:08:01.673 12.012 - 12.062: 92.7935% ( 112) 00:08:01.673 12.062 - 12.111: 93.4688% ( 113) 00:08:01.673 12.111 - 12.160: 94.0305% ( 94) 00:08:01.673 12.160 - 12.209: 94.5623% ( 89) 00:08:01.673 12.209 - 12.258: 95.0881% ( 88) 00:08:01.673 12.258 - 12.308: 95.4108% ( 54) 00:08:01.673 12.308 - 12.357: 95.7036% ( 49) 00:08:01.673 12.357 - 12.406: 95.8351% ( 22) 00:08:01.673 12.406 - 12.455: 95.9307% ( 16) 00:08:01.673 12.455 - 12.505: 96.0143% ( 14) 00:08:01.673 12.505 - 12.554: 96.0801% ( 11) 00:08:01.673 12.554 - 12.603: 96.1099% ( 5) 00:08:01.673 12.603 - 12.702: 96.1697% ( 10) 00:08:01.673 12.702 - 12.800: 96.2414% ( 12) 00:08:01.673 12.800 - 12.898: 96.3549% ( 19) 00:08:01.673 12.898 - 12.997: 96.5223% ( 28) 00:08:01.673 12.997 - 13.095: 96.6597% ( 23) 00:08:01.673 13.095 - 13.194: 96.7792% ( 20) 00:08:01.673 13.194 - 13.292: 96.9047% ( 21) 00:08:01.673 13.292 - 13.391: 97.0182% ( 19) 00:08:01.673 13.391 - 13.489: 97.1616% ( 24) 00:08:01.673 13.489 - 13.588: 97.2632% ( 17) 00:08:01.674 13.588 - 13.686: 97.3230% ( 10) 00:08:01.674 13.686 - 13.785: 97.3827% ( 10) 00:08:01.674 13.785 - 13.883: 97.4485% ( 11) 00:08:01.674 13.883 - 13.982: 97.4843% ( 6) 00:08:01.674 13.982 - 14.080: 97.5321% ( 8) 00:08:01.674 14.080 - 14.178: 97.5500% ( 3) 00:08:01.674 14.178 - 14.277: 97.5799% ( 5) 00:08:01.674 14.277 - 14.375: 97.6158% ( 6) 00:08:01.674 14.375 - 14.474: 97.6397% ( 4) 00:08:01.674 14.474 - 14.572: 97.6636% ( 4) 00:08:01.674 14.572 - 14.671: 97.6994% ( 6) 00:08:01.674 14.671 - 14.769: 97.7472% ( 8) 00:08:01.674 14.769 - 14.868: 97.7652% ( 3) 00:08:01.674 14.868 - 14.966: 97.7771% ( 2) 00:08:01.674 14.966 - 15.065: 97.8249% ( 8) 00:08:01.674 15.065 - 15.163: 97.8667% ( 7) 00:08:01.674 15.163 - 15.262: 97.9026% ( 6) 00:08:01.674 15.262 - 15.360: 97.9146% ( 2) 00:08:01.674 15.360 - 
15.458: 97.9385% ( 4) 00:08:01.674 15.458 - 15.557: 97.9803% ( 7) 00:08:01.674 15.557 - 15.655: 97.9922% ( 2) 00:08:01.674 15.655 - 15.754: 98.0102% ( 3) 00:08:01.674 15.754 - 15.852: 98.0221% ( 2) 00:08:01.674 15.852 - 15.951: 98.0341% ( 2) 00:08:01.674 15.951 - 16.049: 98.0520% ( 3) 00:08:01.674 16.049 - 16.148: 98.0699% ( 3) 00:08:01.674 16.148 - 16.246: 98.0819% ( 2) 00:08:01.674 16.246 - 16.345: 98.0938% ( 2) 00:08:01.674 16.345 - 16.443: 98.1356% ( 7) 00:08:01.674 16.443 - 16.542: 98.1834% ( 8) 00:08:01.674 16.542 - 16.640: 98.2313% ( 8) 00:08:01.674 16.640 - 16.738: 98.2850% ( 9) 00:08:01.674 16.738 - 16.837: 98.3508% ( 11) 00:08:01.674 16.837 - 16.935: 98.3926% ( 7) 00:08:01.674 16.935 - 17.034: 98.4284% ( 6) 00:08:01.674 17.034 - 17.132: 98.5001% ( 12) 00:08:01.674 17.132 - 17.231: 98.5599% ( 10) 00:08:01.674 17.231 - 17.329: 98.6495% ( 15) 00:08:01.674 17.329 - 17.428: 98.7212% ( 12) 00:08:01.674 17.428 - 17.526: 98.7929% ( 12) 00:08:01.674 17.526 - 17.625: 98.8527% ( 10) 00:08:01.674 17.625 - 17.723: 98.8945% ( 7) 00:08:01.674 17.723 - 17.822: 98.9423% ( 8) 00:08:01.674 17.822 - 17.920: 98.9603% ( 3) 00:08:01.674 17.920 - 18.018: 99.0200% ( 10) 00:08:01.674 18.018 - 18.117: 99.0559% ( 6) 00:08:01.674 18.117 - 18.215: 99.0738% ( 3) 00:08:01.674 18.215 - 18.314: 99.0917% ( 3) 00:08:01.674 18.314 - 18.412: 99.1037% ( 2) 00:08:01.674 18.412 - 18.511: 99.1276% ( 4) 00:08:01.674 18.511 - 18.609: 99.1395% ( 2) 00:08:01.674 18.609 - 18.708: 99.1455% ( 1) 00:08:01.674 18.708 - 18.806: 99.1634% ( 3) 00:08:01.674 18.905 - 19.003: 99.1754% ( 2) 00:08:01.674 19.003 - 19.102: 99.1814% ( 1) 00:08:01.674 19.397 - 19.495: 99.1873% ( 1) 00:08:01.674 19.495 - 19.594: 99.1933% ( 1) 00:08:01.674 20.578 - 20.677: 99.1993% ( 1) 00:08:01.674 21.169 - 21.268: 99.2112% ( 2) 00:08:01.674 21.268 - 21.366: 99.2232% ( 2) 00:08:01.674 21.662 - 21.760: 99.2292% ( 1) 00:08:01.674 21.760 - 21.858: 99.2351% ( 1) 00:08:01.674 22.252 - 22.351: 99.2411% ( 1) 00:08:01.674 22.646 - 22.745: 99.2471% ( 1) 00:08:01.674 23.138 - 23.237: 99.2531% ( 1) 00:08:01.674 23.434 - 23.532: 99.2650% ( 2) 00:08:01.674 23.631 - 23.729: 99.2710% ( 1) 00:08:01.674 23.729 - 23.828: 99.2770% ( 1) 00:08:01.674 24.320 - 24.418: 99.2829% ( 1) 00:08:01.674 24.418 - 24.517: 99.2889% ( 1) 00:08:01.674 24.714 - 24.812: 99.2949% ( 1) 00:08:01.674 26.191 - 26.388: 99.3009% ( 1) 00:08:01.674 26.388 - 26.585: 99.3068% ( 1) 00:08:01.674 27.766 - 27.963: 99.3128% ( 1) 00:08:01.674 28.357 - 28.554: 99.3188% ( 1) 00:08:01.674 30.326 - 30.523: 99.3248% ( 1) 00:08:01.674 30.523 - 30.720: 99.3307% ( 1) 00:08:01.674 30.720 - 30.917: 99.3606% ( 5) 00:08:01.674 30.917 - 31.114: 99.5040% ( 24) 00:08:01.674 31.114 - 31.311: 99.6952% ( 32) 00:08:01.674 31.311 - 31.508: 99.7610% ( 11) 00:08:01.674 31.508 - 31.705: 99.7729% ( 2) 00:08:01.674 31.705 - 31.902: 99.7789% ( 1) 00:08:01.674 31.902 - 32.098: 99.8148% ( 6) 00:08:01.674 32.098 - 32.295: 99.8566% ( 7) 00:08:01.674 32.295 - 32.492: 99.8685% ( 2) 00:08:01.674 32.492 - 32.689: 99.8805% ( 2) 00:08:01.674 32.689 - 32.886: 99.8924% ( 2) 00:08:01.674 32.886 - 33.083: 99.8984% ( 1) 00:08:01.674 33.477 - 33.674: 99.9044% ( 1) 00:08:01.674 37.415 - 37.612: 99.9104% ( 1) 00:08:01.674 38.597 - 38.794: 99.9163% ( 1) 00:08:01.674 41.354 - 41.551: 99.9223% ( 1) 00:08:01.674 46.277 - 46.474: 99.9283% ( 1) 00:08:01.674 47.065 - 47.262: 99.9343% ( 1) 00:08:01.674 48.049 - 48.246: 99.9402% ( 1) 00:08:01.674 49.428 - 49.625: 99.9522% ( 2) 00:08:01.674 53.169 - 53.563: 99.9582% ( 1) 00:08:01.674 57.895 - 58.289: 99.9701% ( 2) 
00:08:01.674 58.683 - 59.077: 99.9761% ( 1) 00:08:01.674 68.529 - 68.923: 99.9821% ( 1) 00:08:01.674 72.862 - 73.255: 99.9880% ( 1) 00:08:01.674 100.825 - 101.612: 99.9940% ( 1) 00:08:01.674 294.597 - 296.172: 100.0000% ( 1) 00:08:01.674 00:08:01.674 Complete histogram 00:08:01.674 ================== 00:08:01.674 Range in us Cumulative Count 00:08:01.674 7.188 - 7.237: 0.0299% ( 5) 00:08:01.674 7.237 - 7.286: 0.4840% ( 76) 00:08:01.674 7.286 - 7.335: 3.1132% ( 440) 00:08:01.674 7.335 - 7.385: 9.3158% ( 1038) 00:08:01.674 7.385 - 7.434: 19.5339% ( 1710) 00:08:01.674 7.434 - 7.483: 34.8670% ( 2566) 00:08:01.674 7.483 - 7.532: 53.6241% ( 3139) 00:08:01.674 7.532 - 7.582: 70.7619% ( 2868) 00:08:01.674 7.582 - 7.631: 82.5993% ( 1981) 00:08:01.674 7.631 - 7.680: 89.0469% ( 1079) 00:08:01.674 7.680 - 7.729: 92.5605% ( 588) 00:08:01.674 7.729 - 7.778: 94.1440% ( 265) 00:08:01.674 7.778 - 7.828: 94.8551% ( 119) 00:08:01.674 7.828 - 7.877: 95.1479% ( 49) 00:08:01.674 7.877 - 7.926: 95.3511% ( 34) 00:08:01.674 7.926 - 7.975: 95.4228% ( 12) 00:08:01.674 7.975 - 8.025: 95.5423% ( 20) 00:08:01.674 8.025 - 8.074: 95.6140% ( 12) 00:08:01.674 8.074 - 8.123: 95.7693% ( 26) 00:08:01.674 8.123 - 8.172: 95.8889% ( 20) 00:08:01.674 8.172 - 8.222: 96.0143% ( 21) 00:08:01.674 8.222 - 8.271: 96.2952% ( 47) 00:08:01.674 8.271 - 8.320: 96.5043% ( 35) 00:08:01.674 8.320 - 8.369: 96.6597% ( 26) 00:08:01.674 8.369 - 8.418: 96.8569% ( 33) 00:08:01.674 8.418 - 8.468: 96.9764% ( 20) 00:08:01.674 8.468 - 8.517: 97.0720% ( 16) 00:08:01.674 8.517 - 8.566: 97.0959% ( 4) 00:08:01.674 8.566 - 8.615: 97.1318% ( 6) 00:08:01.674 8.615 - 8.665: 97.1616% ( 5) 00:08:01.674 8.665 - 8.714: 97.1855% ( 4) 00:08:01.674 8.763 - 8.812: 97.1915% ( 1) 00:08:01.674 8.960 - 9.009: 97.1975% ( 1) 00:08:01.674 9.058 - 9.108: 97.2094% ( 2) 00:08:01.674 9.157 - 9.206: 97.2214% ( 2) 00:08:01.674 9.206 - 9.255: 97.2333% ( 2) 00:08:01.674 9.305 - 9.354: 97.2393% ( 1) 00:08:01.674 9.403 - 9.452: 97.2513% ( 2) 00:08:01.674 9.452 - 9.502: 97.2632% ( 2) 00:08:01.674 9.502 - 9.551: 97.2692% ( 1) 00:08:01.674 9.551 - 9.600: 97.2752% ( 1) 00:08:01.674 9.600 - 9.649: 97.2811% ( 1) 00:08:01.674 9.748 - 9.797: 97.2871% ( 1) 00:08:01.674 9.797 - 9.846: 97.2991% ( 2) 00:08:01.674 9.846 - 9.895: 97.3050% ( 1) 00:08:01.674 9.895 - 9.945: 97.3230% ( 3) 00:08:01.674 9.945 - 9.994: 97.3349% ( 2) 00:08:01.674 9.994 - 10.043: 97.3529% ( 3) 00:08:01.674 10.043 - 10.092: 97.3708% ( 3) 00:08:01.674 10.092 - 10.142: 97.3887% ( 3) 00:08:01.674 10.142 - 10.191: 97.3947% ( 1) 00:08:01.674 10.191 - 10.240: 97.4126% ( 3) 00:08:01.674 10.240 - 10.289: 97.4305% ( 3) 00:08:01.674 10.289 - 10.338: 97.4604% ( 5) 00:08:01.674 10.338 - 10.388: 97.4664% ( 1) 00:08:01.674 10.388 - 10.437: 97.4903% ( 4) 00:08:01.674 10.437 - 10.486: 97.5082% ( 3) 00:08:01.674 10.486 - 10.535: 97.5202% ( 2) 00:08:01.674 10.535 - 10.585: 97.5321% ( 2) 00:08:01.674 10.585 - 10.634: 97.5441% ( 2) 00:08:01.674 10.634 - 10.683: 97.5500% ( 1) 00:08:01.674 10.683 - 10.732: 97.5560% ( 1) 00:08:01.674 10.831 - 10.880: 97.5680% ( 2) 00:08:01.674 10.880 - 10.929: 97.5739% ( 1) 00:08:01.674 10.929 - 10.978: 97.5799% ( 1) 00:08:01.674 11.225 - 11.274: 97.5859% ( 1) 00:08:01.674 11.815 - 11.865: 97.5978% ( 2) 00:08:01.674 12.308 - 12.357: 97.6098% ( 2) 00:08:01.674 12.357 - 12.406: 97.6218% ( 2) 00:08:01.674 12.554 - 12.603: 97.6277% ( 1) 00:08:01.674 12.603 - 12.702: 97.6337% ( 1) 00:08:01.674 12.702 - 12.800: 97.6397% ( 1) 00:08:01.674 12.800 - 12.898: 97.6576% ( 3) 00:08:01.674 12.898 - 12.997: 97.6875% ( 5) 
00:08:01.674 12.997 - 13.095: 97.7711% ( 14) 00:08:01.674 13.095 - 13.194: 97.8428% ( 12) 00:08:01.675 13.194 - 13.292: 97.8966% ( 9) 00:08:01.675 13.292 - 13.391: 97.9504% ( 9) 00:08:01.675 13.391 - 13.489: 97.9982% ( 8) 00:08:01.675 13.489 - 13.588: 98.0878% ( 15) 00:08:01.675 13.588 - 13.686: 98.1655% ( 13) 00:08:01.675 13.686 - 13.785: 98.2611% ( 16) 00:08:01.675 13.785 - 13.883: 98.3627% ( 17) 00:08:01.675 13.883 - 13.982: 98.4583% ( 16) 00:08:01.675 13.982 - 14.080: 98.5001% ( 7) 00:08:01.675 14.080 - 14.178: 98.5719% ( 12) 00:08:01.675 14.178 - 14.277: 98.6376% ( 11) 00:08:01.675 14.277 - 14.375: 98.6854% ( 8) 00:08:01.675 14.375 - 14.474: 98.7451% ( 10) 00:08:01.675 14.474 - 14.572: 98.8169% ( 12) 00:08:01.675 14.572 - 14.671: 98.8706% ( 9) 00:08:01.675 14.671 - 14.769: 98.9364% ( 11) 00:08:01.675 14.769 - 14.868: 98.9961% ( 10) 00:08:01.675 14.868 - 14.966: 99.0200% ( 4) 00:08:01.675 14.966 - 15.065: 99.0499% ( 5) 00:08:01.675 15.065 - 15.163: 99.0618% ( 2) 00:08:01.675 15.163 - 15.262: 99.0738% ( 2) 00:08:01.675 15.458 - 15.557: 99.0857% ( 2) 00:08:01.675 15.557 - 15.655: 99.0977% ( 2) 00:08:01.675 15.655 - 15.754: 99.1097% ( 2) 00:08:01.675 15.754 - 15.852: 99.1156% ( 1) 00:08:01.675 16.246 - 16.345: 99.1216% ( 1) 00:08:01.675 16.640 - 16.738: 99.1276% ( 1) 00:08:01.675 16.837 - 16.935: 99.1336% ( 1) 00:08:01.675 17.034 - 17.132: 99.1455% ( 2) 00:08:01.675 17.132 - 17.231: 99.1515% ( 1) 00:08:01.675 17.329 - 17.428: 99.1575% ( 1) 00:08:01.675 17.526 - 17.625: 99.1754% ( 3) 00:08:01.675 18.018 - 18.117: 99.1814% ( 1) 00:08:01.675 18.117 - 18.215: 99.1873% ( 1) 00:08:01.675 18.215 - 18.314: 99.1933% ( 1) 00:08:01.675 18.314 - 18.412: 99.1993% ( 1) 00:08:01.675 18.412 - 18.511: 99.2053% ( 1) 00:08:01.675 18.511 - 18.609: 99.2112% ( 1) 00:08:01.675 18.609 - 18.708: 99.2232% ( 2) 00:08:01.675 18.708 - 18.806: 99.2292% ( 1) 00:08:01.675 18.806 - 18.905: 99.2351% ( 1) 00:08:01.675 19.200 - 19.298: 99.2411% ( 1) 00:08:01.675 19.397 - 19.495: 99.2531% ( 2) 00:08:01.675 19.692 - 19.791: 99.2590% ( 1) 00:08:01.675 19.791 - 19.889: 99.2770% ( 3) 00:08:01.675 20.086 - 20.185: 99.2829% ( 1) 00:08:01.675 20.185 - 20.283: 99.2889% ( 1) 00:08:01.675 20.775 - 20.874: 99.3009% ( 2) 00:08:01.675 20.972 - 21.071: 99.3068% ( 1) 00:08:01.675 21.465 - 21.563: 99.3128% ( 1) 00:08:01.675 21.760 - 21.858: 99.3248% ( 2) 00:08:01.675 22.055 - 22.154: 99.3307% ( 1) 00:08:01.675 22.154 - 22.252: 99.3487% ( 3) 00:08:01.675 22.252 - 22.351: 99.3905% ( 7) 00:08:01.675 22.351 - 22.449: 99.4801% ( 15) 00:08:01.675 22.449 - 22.548: 99.6355% ( 26) 00:08:01.675 22.548 - 22.646: 99.7550% ( 20) 00:08:01.675 22.646 - 22.745: 99.8088% ( 9) 00:08:01.675 22.745 - 22.843: 99.8506% ( 7) 00:08:01.675 22.843 - 22.942: 99.8685% ( 3) 00:08:01.675 22.942 - 23.040: 99.8745% ( 1) 00:08:01.675 23.138 - 23.237: 99.8924% ( 3) 00:08:01.675 23.434 - 23.532: 99.8984% ( 1) 00:08:01.675 23.532 - 23.631: 99.9044% ( 1) 00:08:01.675 23.631 - 23.729: 99.9104% ( 1) 00:08:01.675 26.978 - 27.175: 99.9163% ( 1) 00:08:01.675 27.175 - 27.372: 99.9223% ( 1) 00:08:01.675 27.569 - 27.766: 99.9283% ( 1) 00:08:01.675 35.840 - 36.037: 99.9343% ( 1) 00:08:01.675 39.385 - 39.582: 99.9462% ( 2) 00:08:01.675 40.369 - 40.566: 99.9522% ( 1) 00:08:01.675 43.126 - 43.323: 99.9582% ( 1) 00:08:01.675 44.702 - 44.898: 99.9641% ( 1) 00:08:01.675 53.169 - 53.563: 99.9701% ( 1) 00:08:01.675 58.683 - 59.077: 99.9761% ( 1) 00:08:01.675 59.077 - 59.471: 99.9821% ( 1) 00:08:01.675 62.228 - 62.622: 99.9880% ( 1) 00:08:01.675 138.634 - 139.422: 99.9940% ( 1) 00:08:01.675 
311.926 - 313.502: 100.0000% ( 1) 00:08:01.675 00:08:01.675 ************************************ 00:08:01.675 END TEST nvme_overhead 00:08:01.675 ************************************ 00:08:01.675 00:08:01.675 real 0m1.183s 00:08:01.675 user 0m1.053s 00:08:01.675 sys 0m0.083s 00:08:01.675 03:08:04 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.675 03:08:04 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:01.675 03:08:04 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:01.675 03:08:04 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:01.675 03:08:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.675 03:08:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.675 ************************************ 00:08:01.675 START TEST nvme_arbitration 00:08:01.675 ************************************ 00:08:01.675 03:08:04 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:04.958 Initializing NVMe Controllers 00:08:04.958 Attached to 0000:00:11.0 00:08:04.958 Attached to 0000:00:13.0 00:08:04.958 Attached to 0000:00:10.0 00:08:04.958 Attached to 0000:00:12.0 00:08:04.958 Associating QEMU NVMe Ctrl (12341 ) with lcore 0 00:08:04.958 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:08:04.958 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:08:04.958 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:04.958 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:04.958 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:04.958 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:04.958 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:04.958 Initialization complete. Launching workers. 
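The per-core IO/s figures that follow come from SPDK's arbitration example. The harness passes only -t 3 -i 0, and the tool echoes its full effective configuration, as traced above (-q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0). A hand-run equivalent:

  sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0

With core mask 0xf it runs one worker on each of cores 0-3 and associates namespaces with lcores; controller 12342 shows up on several cores in the results, evidently because it exposes multiple namespaces.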
00:08:04.958 Starting thread on core 1 with urgent priority queue 00:08:04.958 Starting thread on core 2 with urgent priority queue 00:08:04.958 Starting thread on core 3 with urgent priority queue 00:08:04.958 Starting thread on core 0 with urgent priority queue 00:08:04.958 QEMU NVMe Ctrl (12341 ) core 0: 6698.67 IO/s 14.93 secs/100000 ios 00:08:04.958 QEMU NVMe Ctrl (12342 ) core 0: 6698.67 IO/s 14.93 secs/100000 ios 00:08:04.958 QEMU NVMe Ctrl (12343 ) core 1: 6592.00 IO/s 15.17 secs/100000 ios 00:08:04.958 QEMU NVMe Ctrl (12342 ) core 1: 6592.00 IO/s 15.17 secs/100000 ios 00:08:04.958 QEMU NVMe Ctrl (12340 ) core 2: 6165.33 IO/s 16.22 secs/100000 ios 00:08:04.958 QEMU NVMe Ctrl (12342 ) core 3: 6549.33 IO/s 15.27 secs/100000 ios 00:08:04.958 ======================================================== 00:08:04.958 00:08:04.958 00:08:04.958 real 0m3.210s 00:08:04.958 user 0m9.022s 00:08:04.958 sys 0m0.098s 00:08:04.958 03:08:08 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.958 03:08:08 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:04.958 ************************************ 00:08:04.958 END TEST nvme_arbitration 00:08:04.958 ************************************ 00:08:04.958 03:08:08 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:04.958 03:08:08 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:04.958 03:08:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.958 03:08:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.958 ************************************ 00:08:04.958 START TEST nvme_single_aen 00:08:04.958 ************************************ 00:08:04.958 03:08:08 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:04.958 Asynchronous Event Request test 00:08:04.958 Attached to 0000:00:11.0 00:08:04.958 Attached to 0000:00:13.0 00:08:04.958 Attached to 0000:00:10.0 00:08:04.958 Attached to 0000:00:12.0 00:08:04.958 Reset controller to setup AER completions for this process 00:08:04.958 Registering asynchronous event callbacks... 
00:08:04.958 Getting orig temperature thresholds of all controllers 00:08:04.958 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:04.958 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:04.958 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:04.958 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:04.958 Setting all controllers temperature threshold low to trigger AER 00:08:04.958 Waiting for all controllers temperature threshold to be set lower 00:08:04.958 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:04.958 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:04.958 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:04.958 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:04.958 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:04.958 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:04.958 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:04.958 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:04.958 Waiting for all controllers to trigger AER and reset threshold 00:08:04.958 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:04.958 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:04.958 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:04.958 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:04.958 Cleaning up... 00:08:04.958 ************************************ 00:08:04.958 END TEST nvme_single_aen 00:08:04.958 ************************************ 00:08:04.959 00:08:04.959 real 0m0.196s 00:08:04.959 user 0m0.056s 00:08:04.959 sys 0m0.097s 00:08:04.959 03:08:08 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.959 03:08:08 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:04.959 03:08:08 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:04.959 03:08:08 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.959 03:08:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.959 03:08:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.959 ************************************ 00:08:04.959 START TEST nvme_doorbell_aers 00:08:04.959 ************************************ 00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
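The nvme_doorbell_aers setup traced below collects every NVMe PCI address by piping gen_nvme.sh through jq, then (in the nvme.sh@72/73 loop) drives each device with doorbell_aers under a 10-second timeout. Extracted into a standalone snippet, with rootdir as used in this job:

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"          # here: 0000:00:10.0 through 0000:00:13.0
  for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
      "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
  done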
00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:04.959 03:08:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:05.217 [2024-11-18 03:08:08.691643] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:15.193 Executing: test_write_invalid_db 00:08:15.193 Waiting for AER completion... 00:08:15.193 Failure: test_write_invalid_db 00:08:15.193 00:08:15.193 Executing: test_invalid_db_write_overflow_sq 00:08:15.193 Waiting for AER completion... 00:08:15.193 Failure: test_invalid_db_write_overflow_sq 00:08:15.193 00:08:15.193 Executing: test_invalid_db_write_overflow_cq 00:08:15.193 Waiting for AER completion... 00:08:15.193 Failure: test_invalid_db_write_overflow_cq 00:08:15.193 00:08:15.193 03:08:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:15.193 03:08:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:15.193 [2024-11-18 03:08:18.741198] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:25.170 Executing: test_write_invalid_db 00:08:25.170 Waiting for AER completion... 00:08:25.170 Failure: test_write_invalid_db 00:08:25.170 00:08:25.170 Executing: test_invalid_db_write_overflow_sq 00:08:25.170 Waiting for AER completion... 00:08:25.170 Failure: test_invalid_db_write_overflow_sq 00:08:25.170 00:08:25.170 Executing: test_invalid_db_write_overflow_cq 00:08:25.170 Waiting for AER completion... 00:08:25.170 Failure: test_invalid_db_write_overflow_cq 00:08:25.170 00:08:25.170 03:08:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:25.170 03:08:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:25.428 [2024-11-18 03:08:28.760875] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:35.396 Executing: test_write_invalid_db 00:08:35.396 Waiting for AER completion... 00:08:35.396 Failure: test_write_invalid_db 00:08:35.396 00:08:35.396 Executing: test_invalid_db_write_overflow_sq 00:08:35.397 Waiting for AER completion... 00:08:35.397 Failure: test_invalid_db_write_overflow_sq 00:08:35.397 00:08:35.397 Executing: test_invalid_db_write_overflow_cq 00:08:35.397 Waiting for AER completion... 
00:08:35.397 Failure: test_invalid_db_write_overflow_cq 00:08:35.397 00:08:35.397 03:08:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:35.397 03:08:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:35.397 [2024-11-18 03:08:38.774558] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:45.362 Executing: test_write_invalid_db 00:08:45.362 Waiting for AER completion... 00:08:45.362 Failure: test_write_invalid_db 00:08:45.362 00:08:45.362 Executing: test_invalid_db_write_overflow_sq 00:08:45.362 Waiting for AER completion... 00:08:45.362 Failure: test_invalid_db_write_overflow_sq 00:08:45.362 00:08:45.362 Executing: test_invalid_db_write_overflow_cq 00:08:45.362 Waiting for AER completion... 00:08:45.362 Failure: test_invalid_db_write_overflow_cq 00:08:45.362 00:08:45.362 00:08:45.362 real 0m40.180s 00:08:45.362 user 0m34.215s 00:08:45.362 sys 0m5.604s 00:08:45.362 03:08:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.362 03:08:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:45.362 ************************************ 00:08:45.362 END TEST nvme_doorbell_aers 00:08:45.362 ************************************ 00:08:45.362 03:08:48 nvme -- nvme/nvme.sh@97 -- # uname 00:08:45.362 03:08:48 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:45.362 03:08:48 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:45.362 03:08:48 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:45.362 03:08:48 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.362 03:08:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.362 ************************************ 00:08:45.362 START TEST nvme_multi_aen 00:08:45.362 ************************************ 00:08:45.362 03:08:48 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:45.362 [2024-11-18 03:08:48.823134] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:45.362 [2024-11-18 03:08:48.823193] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:45.362 [2024-11-18 03:08:48.823203] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:45.362 [2024-11-18 03:08:48.824231] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:45.362 [2024-11-18 03:08:48.824250] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:45.362 [2024-11-18 03:08:48.824257] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:45.362 [2024-11-18 03:08:48.825143] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. 
Dropping the request. 00:08:45.362 [2024-11-18 03:08:48.825166] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:45.362 [2024-11-18 03:08:48.825174] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:45.362 [2024-11-18 03:08:48.826022] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:45.362 [2024-11-18 03:08:48.826043] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:45.362 [2024-11-18 03:08:48.826049] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75436) is not found. Dropping the request. 00:08:45.362 Child process pid: 75957 00:08:45.620 [Child] Asynchronous Event Request test 00:08:45.620 [Child] Attached to 0000:00:11.0 00:08:45.620 [Child] Attached to 0000:00:13.0 00:08:45.620 [Child] Attached to 0000:00:10.0 00:08:45.620 [Child] Attached to 0000:00:12.0 00:08:45.620 [Child] Registering asynchronous event callbacks... 00:08:45.620 [Child] Getting orig temperature thresholds of all controllers 00:08:45.620 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:45.620 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:45.620 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:45.620 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:45.620 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:45.620 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:45.620 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:45.620 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:45.620 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:45.620 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:45.620 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:45.620 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:45.620 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:45.620 [Child] Cleaning up... 00:08:45.620 Asynchronous Event Request test 00:08:45.620 Attached to 0000:00:11.0 00:08:45.620 Attached to 0000:00:13.0 00:08:45.620 Attached to 0000:00:10.0 00:08:45.620 Attached to 0000:00:12.0 00:08:45.620 Reset controller to setup AER completions for this process 00:08:45.620 Registering asynchronous event callbacks... 
00:08:45.620 Getting orig temperature thresholds of all controllers 00:08:45.620 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:45.620 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:45.620 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:45.620 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:45.620 Setting all controllers temperature threshold low to trigger AER 00:08:45.620 Waiting for all controllers temperature threshold to be set lower 00:08:45.620 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:45.620 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:45.620 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:45.620 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:45.620 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:45.620 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:45.620 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:45.621 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:45.621 Waiting for all controllers to trigger AER and reset threshold 00:08:45.621 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:45.621 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:45.621 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:45.621 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:45.621 Cleaning up... 00:08:45.621 00:08:45.621 real 0m0.361s 00:08:45.621 user 0m0.112s 00:08:45.621 sys 0m0.145s 00:08:45.621 03:08:49 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.621 03:08:49 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:45.621 ************************************ 00:08:45.621 END TEST nvme_multi_aen 00:08:45.621 ************************************ 00:08:45.621 03:08:49 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:45.621 03:08:49 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:45.621 03:08:49 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.621 03:08:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.621 ************************************ 00:08:45.621 START TEST nvme_startup 00:08:45.621 ************************************ 00:08:45.621 03:08:49 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:45.878 Initializing NVMe Controllers 00:08:45.878 Attached to 0000:00:11.0 00:08:45.878 Attached to 0000:00:13.0 00:08:45.878 Attached to 0000:00:10.0 00:08:45.878 Attached to 0000:00:12.0 00:08:45.878 Initialization complete. 00:08:45.878 Time used:119531.406 (us). 
00:08:45.878 00:08:45.878 real 0m0.172s 00:08:45.878 user 0m0.050s 00:08:45.878 sys 0m0.075s 00:08:45.878 03:08:49 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.878 03:08:49 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:45.878 ************************************ 00:08:45.878 END TEST nvme_startup 00:08:45.878 ************************************ 00:08:45.878 03:08:49 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:45.878 03:08:49 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:45.878 03:08:49 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.878 03:08:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.878 ************************************ 00:08:45.878 START TEST nvme_multi_secondary 00:08:45.878 ************************************ 00:08:45.878 03:08:49 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:45.878 03:08:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=76007 00:08:45.878 03:08:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:45.878 03:08:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=76008 00:08:45.878 03:08:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:45.878 03:08:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:49.160 Initializing NVMe Controllers 00:08:49.160 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:49.160 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:49.160 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:49.160 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:49.160 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:49.160 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:49.160 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:49.160 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:49.160 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:49.160 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:49.160 Initialization complete. Launching workers. 
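nvme_multi_secondary, whose result tables follow, launches three spdk_nvme_perf processes against the same set of controllers: all of them share SPDK shared-memory instance 0 (-i 0), so one process acts as the primary and the other two attach as secondaries, each pinned to its own core mask. Reduced to its essentials from the trace (the test script additionally records the background pids and waits on them, visible as 'wait 76007' and 'wait 76008' in the trace):

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # core 0, runs 5 s
  sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # core 1, runs 3 s
  sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # core 2, runs 3 s
  wait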
00:08:49.160 ======================================================== 00:08:49.160 Latency(us) 00:08:49.160 Device Information : IOPS MiB/s Average min max 00:08:49.160 PCIE (0000:00:11.0) NSID 1 from core 2: 3198.46 12.49 5002.09 826.15 13167.45 00:08:49.160 PCIE (0000:00:13.0) NSID 1 from core 2: 3198.46 12.49 5002.30 814.75 12931.52 00:08:49.160 PCIE (0000:00:10.0) NSID 1 from core 2: 3198.46 12.49 5000.00 802.92 13018.55 00:08:49.160 PCIE (0000:00:12.0) NSID 1 from core 2: 3198.46 12.49 5002.79 822.73 12756.14 00:08:49.160 PCIE (0000:00:12.0) NSID 2 from core 2: 3198.46 12.49 5002.31 813.70 12367.28 00:08:49.160 PCIE (0000:00:12.0) NSID 3 from core 2: 3198.46 12.49 5002.64 820.74 12925.32 00:08:49.160 ======================================================== 00:08:49.160 Total : 19190.73 74.96 5002.02 802.92 13167.45 00:08:49.160 00:08:49.160 03:08:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 76007 00:08:49.160 Initializing NVMe Controllers 00:08:49.160 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:49.160 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:49.160 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:49.160 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:49.160 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:49.160 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:49.160 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:49.160 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:49.160 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:49.160 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:49.160 Initialization complete. Launching workers. 00:08:49.160 ======================================================== 00:08:49.160 Latency(us) 00:08:49.160 Device Information : IOPS MiB/s Average min max 00:08:49.160 PCIE (0000:00:11.0) NSID 1 from core 1: 7543.62 29.47 2120.57 754.72 6392.35 00:08:49.160 PCIE (0000:00:13.0) NSID 1 from core 1: 7543.62 29.47 2120.62 743.42 5784.41 00:08:49.160 PCIE (0000:00:10.0) NSID 1 from core 1: 7543.62 29.47 2119.65 731.55 5944.83 00:08:49.160 PCIE (0000:00:12.0) NSID 1 from core 1: 7543.62 29.47 2120.60 748.88 5834.05 00:08:49.160 PCIE (0000:00:12.0) NSID 2 from core 1: 7543.62 29.47 2120.58 753.11 6276.41 00:08:49.160 PCIE (0000:00:12.0) NSID 3 from core 1: 7543.62 29.47 2120.57 755.42 6136.34 00:08:49.160 ======================================================== 00:08:49.160 Total : 45261.70 176.80 2120.43 731.55 6392.35 00:08:49.160 00:08:51.062 Initializing NVMe Controllers 00:08:51.062 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:51.062 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:51.062 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:51.062 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:51.062 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:51.062 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:51.062 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:51.062 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:51.062 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:51.062 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:51.062 Initialization complete. Launching workers. 
00:08:51.062 ======================================================== 00:08:51.062 Latency(us) 00:08:51.062 Device Information : IOPS MiB/s Average min max 00:08:51.062 PCIE (0000:00:11.0) NSID 1 from core 0: 10985.00 42.91 1456.13 710.55 6845.19 00:08:51.062 PCIE (0000:00:13.0) NSID 1 from core 0: 10985.00 42.91 1456.06 723.97 6708.44 00:08:51.062 PCIE (0000:00:10.0) NSID 1 from core 0: 10985.00 42.91 1455.14 685.04 7015.01 00:08:51.062 PCIE (0000:00:12.0) NSID 1 from core 0: 10985.00 42.91 1455.90 627.05 6779.05 00:08:51.062 PCIE (0000:00:12.0) NSID 2 from core 0: 10985.00 42.91 1455.82 474.88 6763.94 00:08:51.062 PCIE (0000:00:12.0) NSID 3 from core 0: 10985.00 42.91 1455.74 384.66 6694.45 00:08:51.062 ======================================================== 00:08:51.062 Total : 65910.01 257.46 1455.80 384.66 7015.01 00:08:51.062 00:08:51.321 03:08:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 76008 00:08:51.321 03:08:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=76077 00:08:51.321 03:08:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:51.321 03:08:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=76079 00:08:51.321 03:08:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:51.321 03:08:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:54.606 Initializing NVMe Controllers 00:08:54.606 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:54.606 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:54.606 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:54.606 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:54.606 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:54.606 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:54.606 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:54.606 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:54.606 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:54.606 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:54.606 Initialization complete. Launching workers. 
00:08:54.606 ======================================================== 00:08:54.606 Latency(us) 00:08:54.606 Device Information : IOPS MiB/s Average min max 00:08:54.606 PCIE (0000:00:11.0) NSID 1 from core 0: 7615.23 29.75 2100.64 789.88 6120.20 00:08:54.606 PCIE (0000:00:13.0) NSID 1 from core 0: 7615.23 29.75 2100.71 780.77 6068.56 00:08:54.606 PCIE (0000:00:10.0) NSID 1 from core 0: 7615.23 29.75 2099.88 760.93 5921.89 00:08:54.606 PCIE (0000:00:12.0) NSID 1 from core 0: 7615.23 29.75 2100.74 761.99 6510.56 00:08:54.606 PCIE (0000:00:12.0) NSID 2 from core 0: 7615.23 29.75 2100.76 801.71 6319.19 00:08:54.606 PCIE (0000:00:12.0) NSID 3 from core 0: 7615.23 29.75 2100.96 796.35 6494.47 00:08:54.606 ======================================================== 00:08:54.606 Total : 45691.35 178.48 2100.61 760.93 6510.56 00:08:54.606 00:08:54.606 Initializing NVMe Controllers 00:08:54.606 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:54.606 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:54.606 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:54.606 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:54.606 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:54.606 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:54.606 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:54.606 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:54.606 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:54.606 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:54.606 Initialization complete. Launching workers. 00:08:54.606 ======================================================== 00:08:54.606 Latency(us) 00:08:54.606 Device Information : IOPS MiB/s Average min max 00:08:54.606 PCIE (0000:00:11.0) NSID 1 from core 1: 7503.06 29.31 2132.03 749.19 6232.84 00:08:54.606 PCIE (0000:00:13.0) NSID 1 from core 1: 7503.06 29.31 2132.08 754.72 6462.58 00:08:54.606 PCIE (0000:00:10.0) NSID 1 from core 1: 7503.06 29.31 2131.20 745.29 5927.72 00:08:54.606 PCIE (0000:00:12.0) NSID 1 from core 1: 7503.06 29.31 2132.13 768.42 6190.87 00:08:54.606 PCIE (0000:00:12.0) NSID 2 from core 1: 7503.06 29.31 2132.11 764.22 6031.71 00:08:54.606 PCIE (0000:00:12.0) NSID 3 from core 1: 7503.06 29.31 2132.11 779.29 6326.76 00:08:54.606 ======================================================== 00:08:54.606 Total : 45018.34 175.85 2131.94 745.29 6462.58 00:08:54.606 00:08:56.506 Initializing NVMe Controllers 00:08:56.506 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:56.506 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:56.506 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:56.506 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:56.506 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:56.506 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:56.506 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:56.506 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:56.506 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:56.506 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:56.506 Initialization complete. Launching workers. 
00:08:56.506 ======================================================== 00:08:56.506 Latency(us) 00:08:56.506 Device Information : IOPS MiB/s Average min max 00:08:56.506 PCIE (0000:00:11.0) NSID 1 from core 2: 4608.62 18.00 3470.68 771.20 12696.32 00:08:56.506 PCIE (0000:00:13.0) NSID 1 from core 2: 4608.62 18.00 3470.95 769.24 12384.77 00:08:56.506 PCIE (0000:00:10.0) NSID 1 from core 2: 4608.62 18.00 3470.26 750.54 12089.51 00:08:56.506 PCIE (0000:00:12.0) NSID 1 from core 2: 4608.62 18.00 3471.16 690.00 14181.37 00:08:56.506 PCIE (0000:00:12.0) NSID 2 from core 2: 4608.62 18.00 3470.39 494.95 12853.64 00:08:56.506 PCIE (0000:00:12.0) NSID 3 from core 2: 4608.62 18.00 3470.49 387.85 12778.04 00:08:56.506 ======================================================== 00:08:56.506 Total : 27651.75 108.01 3470.66 387.85 14181.37 00:08:56.506 00:08:56.506 ************************************ 00:08:56.506 END TEST nvme_multi_secondary 00:08:56.506 ************************************ 00:08:56.506 03:09:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 76077 00:08:56.506 03:09:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 76079 00:08:56.506 00:08:56.506 real 0m10.752s 00:08:56.506 user 0m18.284s 00:08:56.506 sys 0m0.542s 00:08:56.506 03:09:00 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.506 03:09:00 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:56.506 03:09:00 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:56.506 03:09:00 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:56.506 03:09:00 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/75034 ]] 00:08:56.506 03:09:00 nvme -- common/autotest_common.sh@1090 -- # kill 75034 00:08:56.506 03:09:00 nvme -- common/autotest_common.sh@1091 -- # wait 75034 00:08:56.506 [2024-11-18 03:09:00.074712] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.074792] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.074809] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.074825] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.075393] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.075425] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.075438] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.075453] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.075988] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 
00:08:56.506 [2024-11-18 03:09:00.076025] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.076038] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.076056] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.076671] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.076713] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.076726] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.506 [2024-11-18 03:09:00.076745] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75956) is not found. Dropping the request. 00:08:56.765 03:09:00 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:08:56.765 03:09:00 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:08:56.765 03:09:00 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:56.765 03:09:00 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:56.765 03:09:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.765 03:09:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:56.765 ************************************ 00:08:56.765 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:56.765 ************************************ 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:56.765 * Looking for test storage... 
00:08:56.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:56.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.765 --rc genhtml_branch_coverage=1 00:08:56.765 --rc genhtml_function_coverage=1 00:08:56.765 --rc genhtml_legend=1 00:08:56.765 --rc geninfo_all_blocks=1 00:08:56.765 --rc geninfo_unexecuted_blocks=1 00:08:56.765 00:08:56.765 ' 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:56.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.765 --rc genhtml_branch_coverage=1 00:08:56.765 --rc genhtml_function_coverage=1 00:08:56.765 --rc genhtml_legend=1 00:08:56.765 --rc geninfo_all_blocks=1 00:08:56.765 --rc geninfo_unexecuted_blocks=1 00:08:56.765 00:08:56.765 ' 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:56.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.765 --rc genhtml_branch_coverage=1 00:08:56.765 --rc genhtml_function_coverage=1 00:08:56.765 --rc genhtml_legend=1 00:08:56.765 --rc geninfo_all_blocks=1 00:08:56.765 --rc geninfo_unexecuted_blocks=1 00:08:56.765 00:08:56.765 ' 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:56.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.765 --rc genhtml_branch_coverage=1 00:08:56.765 --rc genhtml_function_coverage=1 00:08:56.765 --rc genhtml_legend=1 00:08:56.765 --rc geninfo_all_blocks=1 00:08:56.765 --rc geninfo_unexecuted_blocks=1 00:08:56.765 00:08:56.765 ' 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:56.765 
03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:56.765 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76241 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76241 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 76241 ']' 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
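[editor's note] The trace above picks the test's target device by asking gen_nvme.sh for a JSON bdev config and pulling each controller's traddr out with jq; the first BDF returned (0000:00:10.0 in this run) becomes $bdf for the stuck-admin-command test. A minimal standalone sketch of that extraction, with the gen_nvme.sh path and jq filter copied from the trace and everything else illustrative:

  #!/usr/bin/env bash
  # Enumerate NVMe PCI addresses (BDFs) the way the traced helper does:
  # gen_nvme.sh emits a JSON bdev config; jq extracts each traddr.
  rootdir=/home/vagrant/spdk_repo/spdk
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  bdf=${bdfs[0]}   # first controller, 0000:00:10.0 in this run
  echo "using first NVMe BDF: $bdf"

The traced get_nvme_bdfs builds the array with bdfs=($(...)) word-splitting rather than mapfile; both produce the same array for whitespace-free BDF strings.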
00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:57.026 03:09:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:57.026 [2024-11-18 03:09:00.448989] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:57.026 [2024-11-18 03:09:00.449107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76241 ] 00:08:57.287 [2024-11-18 03:09:00.607598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.287 [2024-11-18 03:09:00.643012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.287 [2024-11-18 03:09:00.643398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.287 [2024-11-18 03:09:00.643525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.287 [2024-11-18 03:09:00.643564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:57.851 nvme0n1 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_kmWv0.txt 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:57.851 true 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731899341 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76264 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:57.851 03:09:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h 
-c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:00.386 [2024-11-18 03:09:03.373679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:00.386 [2024-11-18 03:09:03.373917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:00.386 [2024-11-18 03:09:03.373939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:00.386 [2024-11-18 03:09:03.373953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.386 [2024-11-18 03:09:03.375686] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:00.386 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76264 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76264 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76264 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_kmWv0.txt 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- 
# printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:00.386 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_kmWv0.txt 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76241 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 76241 ']' 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 76241 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76241 00:09:00.387 killing process with pid 76241 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76241' 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 76241 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 76241 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:00.387 ************************************ 00:09:00.387 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:00.387 ************************************ 00:09:00.387 00:09:00.387 real 0m3.568s 00:09:00.387 user 0m12.700s 00:09:00.387 
sys 0m0.444s 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.387 03:09:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:00.387 03:09:03 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:00.387 03:09:03 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:00.387 03:09:03 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.387 03:09:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.387 03:09:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:00.387 ************************************ 00:09:00.387 START TEST nvme_fio 00:09:00.387 ************************************ 00:09:00.387 03:09:03 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:00.387 03:09:03 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:00.387 03:09:03 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:00.387 03:09:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:00.387 03:09:03 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:00.387 03:09:03 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:00.387 03:09:03 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:00.387 03:09:03 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:00.387 03:09:03 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:00.387 03:09:03 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:00.387 03:09:03 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:00.387 03:09:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:00.387 03:09:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:00.387 03:09:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:00.387 03:09:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:00.387 03:09:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:00.645 03:09:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:00.645 03:09:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:00.904 03:09:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:00.904 03:09:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:00.904 03:09:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:00.904 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:00.904 fio-3.35 00:09:00.904 Starting 1 thread 00:09:06.183 00:09:06.183 test: (groupid=0, jobs=1): err= 0: pid=76391: Mon Nov 18 03:09:09 2024 00:09:06.183 read: IOPS=21.4k, BW=83.6MiB/s (87.6MB/s)(167MiB/2001msec) 00:09:06.183 slat (nsec): min=3279, max=71208, avg=5079.98, stdev=2448.09 00:09:06.183 clat (usec): min=825, max=12223, avg=2988.88, stdev=1017.22 00:09:06.183 lat (usec): min=830, max=12294, avg=2993.96, stdev=1018.45 00:09:06.183 clat percentiles (usec): 00:09:06.183 | 1.00th=[ 1729], 5.00th=[ 2073], 10.00th=[ 2180], 20.00th=[ 2343], 00:09:06.183 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2638], 60.00th=[ 2769], 00:09:06.183 | 70.00th=[ 2999], 80.00th=[ 3490], 90.00th=[ 4490], 95.00th=[ 5276], 00:09:06.183 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7570], 99.95th=[ 9503], 00:09:06.183 | 99.99th=[12125] 00:09:06.183 bw ( KiB/s): min=84240, max=85608, per=99.23%, avg=84922.67, stdev=684.00, samples=3 00:09:06.183 iops : min=21060, max=21402, avg=21230.67, stdev=171.00, samples=3 00:09:06.183 write: IOPS=21.2k, BW=82.9MiB/s (87.0MB/s)(166MiB/2001msec); 0 zone resets 00:09:06.183 slat (nsec): min=3337, max=55494, avg=5238.81, stdev=2346.59 00:09:06.183 clat (usec): min=834, max=12150, avg=2997.84, stdev=1011.81 00:09:06.183 lat (usec): min=838, max=12165, avg=3003.07, stdev=1013.00 00:09:06.183 clat percentiles (usec): 00:09:06.183 | 1.00th=[ 1745], 5.00th=[ 2073], 10.00th=[ 2180], 20.00th=[ 2343], 00:09:06.183 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2638], 60.00th=[ 2802], 00:09:06.183 | 70.00th=[ 3032], 80.00th=[ 3490], 90.00th=[ 4490], 95.00th=[ 5276], 00:09:06.183 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 8356], 99.95th=[ 9503], 00:09:06.183 | 99.99th=[11994] 00:09:06.183 bw ( KiB/s): min=84368, max=85368, per=100.00%, avg=85029.33, stdev=572.79, samples=3 00:09:06.183 iops : min=21092, max=21342, avg=21257.33, stdev=143.20, samples=3 00:09:06.183 lat (usec) : 1000=0.02% 00:09:06.183 lat (msec) : 2=2.90%, 4=82.71%, 10=14.35%, 20=0.03% 00:09:06.183 cpu : usr=99.15%, sys=0.10%, ctx=2, majf=0, minf=627 00:09:06.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:06.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.183 issued rwts: total=42810,42487,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.183 00:09:06.183 Run status group 0 (all jobs): 00:09:06.183 READ: bw=83.6MiB/s (87.6MB/s), 83.6MiB/s-83.6MiB/s (87.6MB/s-87.6MB/s), io=167MiB (175MB), run=2001-2001msec 00:09:06.183 WRITE: bw=82.9MiB/s (87.0MB/s), 82.9MiB/s-82.9MiB/s (87.0MB/s-87.0MB/s), io=166MiB (174MB), run=2001-2001msec 00:09:06.183 ----------------------------------------------------- 00:09:06.183 Suppressions used: 00:09:06.183 count bytes template 00:09:06.183 1 32 /usr/src/fio/parse.c 00:09:06.183 1 8 libtcmalloc_minimal.so 00:09:06.183 ----------------------------------------------------- 00:09:06.183 00:09:06.183 03:09:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:06.183 03:09:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:06.183 03:09:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:06.183 03:09:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:06.443 03:09:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:06.443 03:09:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:06.703 03:09:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:06.703 03:09:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:06.703 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:06.703 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:06.703 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:06.703 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:06.704 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.704 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:06.704 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:06.704 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:06.704 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.704 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:06.704 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:06.704 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:06.704 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:06.704 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:06.704 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:06.704 03:09:10 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:06.704 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:06.704 fio-3.35 00:09:06.704 Starting 1 thread 00:09:13.278 00:09:13.278 test: (groupid=0, jobs=1): err= 0: pid=76448: Mon Nov 18 03:09:15 2024 00:09:13.278 read: IOPS=19.7k, BW=77.1MiB/s (80.9MB/s)(154MiB/2001msec) 00:09:13.278 slat (usec): min=4, max=204, avg= 5.46, stdev= 2.76 00:09:13.278 clat (usec): min=1068, max=10316, avg=3219.52, stdev=1062.67 00:09:13.278 lat (usec): min=1073, max=10352, avg=3224.98, stdev=1063.82 00:09:13.278 clat percentiles (usec): 00:09:13.278 | 1.00th=[ 2024], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:13.278 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 2999], 00:09:13.278 | 70.00th=[ 3195], 80.00th=[ 3621], 90.00th=[ 4883], 95.00th=[ 5669], 00:09:13.278 | 99.00th=[ 6980], 99.50th=[ 7439], 99.90th=[ 7963], 99.95th=[ 8848], 00:09:13.278 | 99.99th=[ 9765] 00:09:13.278 bw ( KiB/s): min=75080, max=80080, per=97.30%, avg=76824.00, stdev=2822.16, samples=3 00:09:13.278 iops : min=18770, max=20020, avg=19206.00, stdev=705.54, samples=3 00:09:13.278 write: IOPS=19.7k, BW=77.0MiB/s (80.7MB/s)(154MiB/2001msec); 0 zone resets 00:09:13.278 slat (nsec): min=4293, max=72203, avg=5621.53, stdev=2631.76 00:09:13.278 clat (usec): min=1088, max=9837, avg=3252.36, stdev=1079.14 00:09:13.278 lat (usec): min=1093, max=9847, avg=3257.98, stdev=1080.28 00:09:13.278 clat percentiles (usec): 00:09:13.278 | 1.00th=[ 2057], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:13.278 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 3032], 00:09:13.278 | 70.00th=[ 3228], 80.00th=[ 3654], 90.00th=[ 4948], 95.00th=[ 5735], 00:09:13.278 | 99.00th=[ 7046], 99.50th=[ 7570], 99.90th=[ 8225], 99.95th=[ 8979], 00:09:13.278 | 99.99th=[ 9765] 00:09:13.278 bw ( KiB/s): min=75248, max=79992, per=97.63%, avg=76933.33, stdev=2653.47, samples=3 00:09:13.278 iops : min=18812, max=19998, avg=19233.33, stdev=663.37, samples=3 00:09:13.278 lat (msec) : 2=0.83%, 4=82.51%, 10=16.66%, 20=0.01% 00:09:13.278 cpu : usr=98.60%, sys=0.30%, ctx=4, majf=0, minf=626 00:09:13.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:13.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.278 issued rwts: total=39499,39422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.278 00:09:13.278 Run status group 0 (all jobs): 00:09:13.278 READ: bw=77.1MiB/s (80.9MB/s), 77.1MiB/s-77.1MiB/s (80.9MB/s-80.9MB/s), io=154MiB (162MB), run=2001-2001msec 00:09:13.278 WRITE: bw=77.0MiB/s (80.7MB/s), 77.0MiB/s-77.0MiB/s (80.7MB/s-80.7MB/s), io=154MiB (161MB), run=2001-2001msec 00:09:13.278 ----------------------------------------------------- 00:09:13.278 Suppressions used: 00:09:13.278 count bytes template 00:09:13.278 1 32 /usr/src/fio/parse.c 00:09:13.278 1 8 libtcmalloc_minimal.so 00:09:13.278 ----------------------------------------------------- 00:09:13.278 00:09:13.278 03:09:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:13.278 03:09:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in 
"${bdfs[@]}" 00:09:13.278 03:09:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:13.278 03:09:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:13.278 03:09:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:13.278 03:09:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:13.278 03:09:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:13.278 03:09:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:13.278 03:09:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:13.278 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:13.278 fio-3.35 00:09:13.278 Starting 1 thread 00:09:19.913 00:09:19.913 test: (groupid=0, jobs=1): err= 0: pid=76508: Mon Nov 18 03:09:22 2024 00:09:19.913 read: IOPS=18.3k, BW=71.4MiB/s (74.9MB/s)(143MiB/2001msec) 00:09:19.913 slat (nsec): min=3448, max=88322, avg=5861.73, stdev=3220.15 00:09:19.913 clat (usec): min=432, max=13908, avg=3473.44, stdev=1256.33 00:09:19.913 lat (usec): min=437, max=13974, avg=3479.30, stdev=1257.81 00:09:19.913 clat percentiles (usec): 00:09:19.913 | 1.00th=[ 2008], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2606], 00:09:19.913 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2999], 60.00th=[ 3195], 00:09:19.913 | 70.00th=[ 3589], 80.00th=[ 4359], 90.00th=[ 5342], 95.00th=[ 
6128], 00:09:19.913 | 99.00th=[ 7832], 99.50th=[ 8160], 99.90th=[ 9241], 99.95th=[10945], 00:09:19.913 | 99.99th=[13698] 00:09:19.913 bw ( KiB/s): min=62704, max=76552, per=98.21%, avg=71805.00, stdev=7884.15, samples=3 00:09:19.913 iops : min=15676, max=19138, avg=17951.00, stdev=1970.83, samples=3 00:09:19.913 write: IOPS=18.3k, BW=71.4MiB/s (74.9MB/s)(143MiB/2001msec); 0 zone resets 00:09:19.913 slat (nsec): min=3615, max=82489, avg=6033.21, stdev=3265.75 00:09:19.913 clat (usec): min=365, max=13807, avg=3503.56, stdev=1263.01 00:09:19.913 lat (usec): min=370, max=13827, avg=3509.59, stdev=1264.51 00:09:19.913 clat percentiles (usec): 00:09:19.913 | 1.00th=[ 2024], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2606], 00:09:19.913 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 3032], 60.00th=[ 3228], 00:09:19.913 | 70.00th=[ 3621], 80.00th=[ 4359], 90.00th=[ 5407], 95.00th=[ 6128], 00:09:19.913 | 99.00th=[ 7832], 99.50th=[ 8225], 99.90th=[10159], 99.95th=[11076], 00:09:19.913 | 99.99th=[11863] 00:09:19.913 bw ( KiB/s): min=63088, max=76143, per=98.17%, avg=71762.33, stdev=7512.32, samples=3 00:09:19.913 iops : min=15772, max=19035, avg=17940.33, stdev=1877.86, samples=3 00:09:19.913 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:09:19.913 lat (msec) : 2=0.92%, 4=74.83%, 10=24.12%, 20=0.09% 00:09:19.913 cpu : usr=98.75%, sys=0.20%, ctx=16, majf=0, minf=627 00:09:19.913 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:19.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.913 issued rwts: total=36576,36570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.913 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.913 00:09:19.913 Run status group 0 (all jobs): 00:09:19.913 READ: bw=71.4MiB/s (74.9MB/s), 71.4MiB/s-71.4MiB/s (74.9MB/s-74.9MB/s), io=143MiB (150MB), run=2001-2001msec 00:09:19.913 WRITE: bw=71.4MiB/s (74.9MB/s), 71.4MiB/s-71.4MiB/s (74.9MB/s-74.9MB/s), io=143MiB (150MB), run=2001-2001msec 00:09:19.913 ----------------------------------------------------- 00:09:19.913 Suppressions used: 00:09:19.913 count bytes template 00:09:19.913 1 32 /usr/src/fio/parse.c 00:09:19.913 1 8 libtcmalloc_minimal.so 00:09:19.913 ----------------------------------------------------- 00:09:19.913 00:09:19.913 03:09:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:19.913 03:09:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:19.913 03:09:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:19.913 03:09:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:19.913 03:09:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:19.913 03:09:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:19.913 03:09:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:19.913 03:09:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:19.913 03:09:23 
nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:19.913 03:09:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:19.913 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:19.913 fio-3.35 00:09:19.913 Starting 1 thread 00:09:25.199 00:09:25.199 test: (groupid=0, jobs=1): err= 0: pid=76569: Mon Nov 18 03:09:27 2024 00:09:25.199 read: IOPS=20.9k, BW=81.8MiB/s (85.8MB/s)(164MiB/2001msec) 00:09:25.199 slat (nsec): min=4238, max=75387, avg=5277.63, stdev=2550.68 00:09:25.199 clat (usec): min=224, max=12863, avg=3046.91, stdev=1070.91 00:09:25.199 lat (usec): min=229, max=12928, avg=3052.19, stdev=1072.14 00:09:25.199 clat percentiles (usec): 00:09:25.199 | 1.00th=[ 1893], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:25.199 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2606], 60.00th=[ 2737], 00:09:25.199 | 70.00th=[ 2966], 80.00th=[ 3556], 90.00th=[ 4817], 95.00th=[ 5473], 00:09:25.199 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7767], 99.95th=[10552], 00:09:25.199 | 99.99th=[12387] 00:09:25.199 bw ( KiB/s): min=74944, max=83904, per=94.38%, avg=79064.00, stdev=4523.18, samples=3 00:09:25.199 iops : min=18736, max=20976, avg=19766.00, stdev=1130.80, samples=3 00:09:25.199 write: IOPS=20.8k, BW=81.4MiB/s (85.3MB/s)(163MiB/2001msec); 0 zone resets 00:09:25.199 slat (nsec): min=4314, max=73508, avg=5469.29, stdev=2510.03 00:09:25.199 clat (usec): min=242, max=12394, avg=3058.69, stdev=1068.78 00:09:25.199 lat (usec): min=247, max=12408, avg=3064.16, stdev=1070.02 00:09:25.199 clat percentiles (usec): 00:09:25.199 | 1.00th=[ 1909], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:25.199 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2769], 00:09:25.199 | 70.00th=[ 2966], 80.00th=[ 3523], 90.00th=[ 4817], 95.00th=[ 5473], 00:09:25.199 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 8586], 99.95th=[10552], 00:09:25.199 | 99.99th=[12256] 00:09:25.199 bw ( KiB/s): 
min=74872, max=84288, per=94.94%, avg=79125.33, stdev=4773.41, samples=3 00:09:25.199 iops : min=18718, max=21072, avg=19781.33, stdev=1193.35, samples=3 00:09:25.199 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:25.199 lat (msec) : 2=1.35%, 4=82.12%, 10=16.43%, 20=0.06% 00:09:25.199 cpu : usr=99.10%, sys=0.05%, ctx=4, majf=0, minf=625 00:09:25.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:25.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.199 issued rwts: total=41909,41694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.199 00:09:25.199 Run status group 0 (all jobs): 00:09:25.199 READ: bw=81.8MiB/s (85.8MB/s), 81.8MiB/s-81.8MiB/s (85.8MB/s-85.8MB/s), io=164MiB (172MB), run=2001-2001msec 00:09:25.199 WRITE: bw=81.4MiB/s (85.3MB/s), 81.4MiB/s-81.4MiB/s (85.3MB/s-85.3MB/s), io=163MiB (171MB), run=2001-2001msec 00:09:25.199 ----------------------------------------------------- 00:09:25.199 Suppressions used: 00:09:25.199 count bytes template 00:09:25.199 1 32 /usr/src/fio/parse.c 00:09:25.199 1 8 libtcmalloc_minimal.so 00:09:25.199 ----------------------------------------------------- 00:09:25.199 00:09:25.199 03:09:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:25.199 03:09:28 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:25.199 00:09:25.199 real 0m24.386s 00:09:25.199 user 0m18.507s 00:09:25.199 sys 0m8.167s 00:09:25.199 03:09:28 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.199 ************************************ 00:09:25.199 END TEST nvme_fio 00:09:25.199 ************************************ 00:09:25.199 03:09:28 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:25.199 00:09:25.199 real 1m32.935s 00:09:25.199 user 3m35.244s 00:09:25.199 sys 0m18.412s 00:09:25.199 03:09:28 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.199 03:09:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.199 ************************************ 00:09:25.199 END TEST nvme 00:09:25.199 ************************************ 00:09:25.199 03:09:28 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:25.199 03:09:28 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:25.199 03:09:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:25.199 03:09:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.199 03:09:28 -- common/autotest_common.sh@10 -- # set +x 00:09:25.199 ************************************ 00:09:25.199 START TEST nvme_scc 00:09:25.199 ************************************ 00:09:25.199 03:09:28 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:25.199 * Looking for test storage... 
00:09:25.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:25.199 03:09:28 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:25.199 03:09:28 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:25.199 03:09:28 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:25.199 03:09:28 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.199 03:09:28 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.200 03:09:28 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:25.200 03:09:28 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.200 03:09:28 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:25.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.200 --rc genhtml_branch_coverage=1 00:09:25.200 --rc genhtml_function_coverage=1 00:09:25.200 --rc genhtml_legend=1 00:09:25.200 --rc geninfo_all_blocks=1 00:09:25.200 --rc geninfo_unexecuted_blocks=1 00:09:25.200 00:09:25.200 ' 00:09:25.200 03:09:28 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:25.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.200 --rc genhtml_branch_coverage=1 00:09:25.200 --rc genhtml_function_coverage=1 00:09:25.200 --rc genhtml_legend=1 00:09:25.200 --rc geninfo_all_blocks=1 00:09:25.200 --rc geninfo_unexecuted_blocks=1 00:09:25.200 00:09:25.200 ' 00:09:25.200 03:09:28 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:25.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.200 --rc genhtml_branch_coverage=1 00:09:25.200 --rc genhtml_function_coverage=1 00:09:25.200 --rc genhtml_legend=1 00:09:25.200 --rc geninfo_all_blocks=1 00:09:25.200 --rc geninfo_unexecuted_blocks=1 00:09:25.200 00:09:25.200 ' 00:09:25.200 03:09:28 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:25.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.200 --rc genhtml_branch_coverage=1 00:09:25.200 --rc genhtml_function_coverage=1 00:09:25.200 --rc genhtml_legend=1 00:09:25.200 --rc geninfo_all_blocks=1 00:09:25.200 --rc geninfo_unexecuted_blocks=1 00:09:25.200 00:09:25.200 ' 00:09:25.200 03:09:28 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:25.200 03:09:28 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:25.200 03:09:28 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:25.200 03:09:28 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:25.200 03:09:28 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.200 03:09:28 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.200 03:09:28 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.200 03:09:28 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.200 03:09:28 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.200 03:09:28 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.200 03:09:28 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.200 03:09:28 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.200 03:09:28 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:25.200 03:09:28 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
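[editor's note] The lcov probe traced earlier in this block ("lt 1.15 2" via cmp_versions) is a plain component-wise version comparison: split both versions on '.', '-' and ':', then compare field by field as integers, padding the shorter version with zeros. A condensed sketch of that pattern, with the traced decimal() validation omitted and only '<', '>' and '=' handled:

  # Component-wise version compare in the style of the traced cmp_versions.
  cmp_versions() {                     # usage: cmp_versions 1.15 '<' 2
      local IFS=.-: op=$2 v a b
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}       # pad missing fields with 0
          if (( a > b )); then [[ $op == '>' ]]; return; fi
          if (( a < b )); then [[ $op == '<' ]]; return; fi
      done
      [[ $op == '=' ]]                 # all fields equal
  }
  cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"

Here cmp_versions 1.15 '<' 2 succeeds, which is why the run exports the branch and function coverage flags into LCOV_OPTS above.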
00:09:25.200 03:09:28 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:25.200 03:09:28 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:25.200 03:09:28 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:25.200 03:09:28 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:25.200 03:09:28 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:25.200 03:09:28 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:25.200 03:09:28 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:25.200 03:09:28 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:25.200 03:09:28 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:25.200 03:09:28 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.200 03:09:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:25.200 03:09:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:25.200 03:09:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:25.200 03:09:28 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:25.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:25.458 Waiting for block devices as requested 00:09:25.458 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.458 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.458 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.458 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.747 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:30.747 03:09:34 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:30.747 03:09:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:30.747 03:09:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:30.747 03:09:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.747 03:09:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:30.747 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
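[editor's note] scan_nvme_ctrls, starting above, walks /sys/class/nvme/nvme*, resolves each controller's PCI address, and then nvme_get turns the text output of nvme id-ctrl into a global associative array (nvme0[vid]=0x1b36 and so on), eval'ing one reg/val pair per line. A condensed sketch of that parse loop, assuming nvme-cli's usual "name : value" layout; the real helper also handles power-state and namespace sub-blocks that this sketch skips:

  # Condensed sketch of the nvme_get pattern traced above: read id-ctrl
  # output one "name : value" line at a time and store each field in a
  # global associative array named by the caller (hence the eval).
  nvme_get_sketch() {                  # usage: nvme_get_sketch nvme0 /dev/nvme0
      local ref=$1 dev=$2 reg val
      declare -gA "$ref"
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}                 # 'vid', 'ssvid', 'oacs', ...
          val=${val#"${val%%[![:space:]]*}"}       # strip leading blanks only
          [[ -n $reg && -n $val ]] || continue     # skip headers/blank lines
          eval "$ref[\$reg]=\$val"                 # e.g. nvme0[vid]=0x1b36
      done < <(nvme id-ctrl "$dev")
  }
  nvme_get_sketch nvme0 /dev/nvme0
  echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]}"

The eval is only needed because the array name is dynamic (nvme0, nvme1, ...); for a fixed name a direct nvme0[$reg]=$val assignment would do.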
00:09:30.747 03:09:34 nvme_scc -- nvme0 id-ctrl: ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:09:30.747 03:09:34 nvme_scc -- nvme0: rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:09:30.748 03:09:34 nvme_scc -- nvme0: nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:09:30.748 03:09:34 nvme_scc -- nvme0: mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
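Note on the mdts=7 value captured above: NVMe reports MDTS as a power of two in units of the controller's minimum memory page size (CAP.MPSMIN, typically 4 KiB), so mdts=7 would cap a single data transfer at 2^7 * 4 KiB = 512 KiB. A minimal sketch of that arithmetic, assuming a 4 KiB minimum page; mdts_bytes is a hypothetical helper, not part of nvme/functions.sh:

    # Hypothetical helper (not from the test scripts): convert an MDTS
    # exponent to a byte count, assuming CAP.MPSMIN=0 (4 KiB pages).
    mdts_bytes() {
      local mdts=$1 min_page=4096
      echo $(( (1 << mdts) * min_page ))
    }
    mdts_bytes 7   # prints 524288, i.e. 512 KiB per command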
00:09:30.749 03:09:34 nvme_scc -- nvme0: sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:30.749 03:09:34 nvme_scc -- nvme0: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0
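The sqes=0x66 and cqes=0x44 values above are packed nibbles: the low nibble is the required (minimum) queue entry size and the high nibble the maximum, each as log2 of the byte count, so 0x66 means 64-byte submission queue entries and 0x44 means 16-byte completion queue entries. An illustrative decode, with decode_qes as an invented name:

    # Illustrative SQES/CQES nibble decode; not part of the test scripts.
    decode_qes() {
      local qes=$1
      echo "min=$(( 1 << (qes & 0xf) )) bytes, max=$(( 1 << (qes >> 4) )) bytes"
    }
    decode_qes 0x66   # min=64 bytes, max=64 bytes (submission queue)
    decode_qes 0x44   # min=16 bytes, max=16 bytes (completion queue)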
00:09:30.749 03:09:34 nvme_scc -- nvme0: vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341
00:09:30.750 03:09:34 nvme_scc -- nvme0: ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:30.750 03:09:34 nvme_scc -- nvme0: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:30.750 03:09:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:30.750 03:09:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:30.750 03:09:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:30.750 03:09:34 nvme_scc -- nvme0n1: nsze=0x140000 ncap=0x140000 nuse=0x140000
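The nvme_get calls traced above drive the pattern that fills these dumps: each "reg : val" line of nvme-cli output is split on ':' with `IFS=: read -r reg val` and the pair is eval'd into a global associative array named after the device. A simplified stand-in for that loop (not the verbatim nvme/functions.sh code) looks like:

    # Sketch of the parse loop the trace exercises: split each nvme-cli
    # line at the first colon, strip the key's whitespace and the value's
    # leading space, and store the pair in a global associative array.
    nvme_get_sketch() {
      local ref=$1 cmd=$2 dev=$3 reg val
      local -gA "$ref=()"
      while IFS=: read -r reg val; do
        [[ -n $val ]] || continue
        reg=${reg//[[:space:]]/}
        val=${val# }
        eval "${ref}[\$reg]=\$val"
      done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    # Usage, mirroring the trace: nvme_get_sketch nvme0n1 id-ns /dev/nvme0n1
    # then ${nvme0n1[nsze]} holds 0x140000, exactly as recorded above.

Note how `read -r reg val` keeps everything after the first colon in val, which is why compound fields like ps0 and rwt land in the array as whole strings, leading space removed and trailing spaces preserved.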
00:09:30.750 03:09:34 nvme_scc -- nvme0n1: nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:30.751 03:09:34 nvme_scc -- nvme0n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:09:30.751 03:09:34 nvme_scc -- nvme0n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:30.752 03:09:34 nvme_scc -- nvme0n1: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:09:30.752 03:09:34 nvme_scc -- nvme0n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
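The eight lbafN entries just recorded, together with flbas=0x4, pin down the namespace geometry: bits 3:0 of FLBAS index the LBA format table, and the selected entry's lbads field is log2 of the data block size, so lbaf4 (ms:0 lbads:12, flagged in use) gives 4096-byte blocks with no per-block metadata. A small illustrative decode under those assumptions, using the values above:

    # Illustrative FLBAS/LBAF decode; variable names are taken from the
    # values the trace just stored, not from the test scripts themselves.
    flbas=0x4
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    idx=$(( flbas & 0xf ))                       # -> 4, matches "(in use)"
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "$lbaf4")
    echo "block size: $(( 1 << lbads )) bytes"   # -> 4096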
00:09:30.752 03:09:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:30.752 03:09:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:30.752 03:09:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:30.752 03:09:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:30.752 03:09:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:09:30.752 03:09:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:30.752 03:09:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:30.752 03:09:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:30.752 03:09:34 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:30.752 03:09:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:09:30.752 03:09:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:30.752 03:09:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:09:30.752 03:09:34 nvme_scc -- nvme1 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:09:30.753 03:09:34 nvme_scc -- nvme1: rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:09:30.753 03:09:34 nvme_scc -- nvme1: crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3
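Both QEMU controllers report oacs=0x12a. Read against the NVMe base spec's OACS bit assignments (bit 1 Format NVM, bit 3 Namespace Management, bit 5 Directives, bit 8 Doorbell Buffer Config), that bitmask suggests which optional admin commands these emulated devices advertise. A hedged sketch of the bit test; the bit-name table is recalled from the spec, not taken from this log:

    # Illustrative OACS bit check; not part of the test scripts.
    oacs=0x12a
    names=([0]="security" [1]="format" [2]="firmware" [3]="ns-mgmt"
           [4]="self-test" [5]="directives" [6]="nvme-mi" [7]="virt-mgmt"
           [8]="doorbell-buffer-config")
    for bit in "${!names[@]}"; do
      (( oacs & (1 << bit) )) && echo "supported: ${names[bit]}"
    done
    # 0x12a sets bits 1, 3, 5, 8: format, ns-mgmt, directives,
    # doorbell-buffer-config.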
00:09:30.753 03:09:34 nvme_scc -- nvme1: frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0
00:09:30.754 03:09:34 nvme_scc -- nvme1: hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
00:09:30.754 03:09:34 nvme_scc -- nvme1: mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0
00:09:30.754 03:09:34 nvme_scc --
nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:30.754 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:30.755 03:09:34 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
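Each assignment above is produced by the same nvme/functions.sh pattern (lines 21-23 in the trace): nvme_get runs nvme-cli's id-ctrl, splits every output line on the first ':' via IFS, and evals any non-empty value into a global associative array keyed by register name. A minimal standalone sketch of that pattern, assuming nvme-cli is installed and /dev/nvme1 exists; the variable names here are illustrative, not the script's exact code:

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # register name, whitespace stripped
        val="${val#"${val%%[![:space:]]*}"}"     # value, leading whitespace trimmed
        [[ -n $val ]] && ctrl[$reg]=$val         # keep only populated registers
    done < <(nvme id-ctrl /dev/nvme1)
    echo "oncs=${ctrl[oncs]} sqes=${ctrl[sqes]}"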
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:09:30.755 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:30.756 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:30.757 03:09:34 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:30.757 03:09:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:30.757 03:09:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:30.757 03:09:34 nvme_scc -- scripts/common.sh@27 -- # return 0
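The namespace dump above lists eight LBA formats (lbaf0-lbaf7); flbas=0x7 selects lbaf7 (lbads:12, i.e. 4096-byte data blocks with 64 bytes of metadata), and nsze=0x17a17a is the namespace size in blocks. A short sketch of how those fields combine, using the values copied from the trace; the low-nibble indexing of FLBAS follows the NVMe base specification:

    flbas=0x7 nsze=0x17a17a lbads=12             # lbads taken from the lbaf7 entry above
    fmt=$(( flbas & 0xf ))                       # bits 3:0 of FLBAS index the format in use
    bytes=$(( nsze * (1 << lbads) ))
    printf 'lbaf%d: %d-byte blocks, %d blocks, %d bytes (~%d GiB)\n' \
        "$fmt" $(( 1 << lbads )) "$nsze" "$bytes" $(( bytes >> 30 ))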
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:09:30.757 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:09:30.758 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
# [[ -n 0x3 ]] 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.759 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:30.760 
03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.760 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
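The wall of xtrace records above reduces to one small pattern: nvme_get runs nvme-cli, splits every "reg : val" line of its output on ":" with read, and evals each pair into a global associative array named by the helper's first argument (nvme2 above, nvme2n1 next). A minimal sketch of that loop, reconstructed from the trace rather than copied from functions.sh; the register-name trimming and the bare "nvme" lookup from PATH are assumptions:

    # Parse `nvme <cmd> <dev>` output into a global associative array named $ref.
    # Reconstructed from the xtrace above; not the verbatim functions.sh helper.
    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                    # matches functions.sh@20: local -gA 'nvme2n1=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # functions.sh@22 skips lines with no value
            reg=${reg//[[:space:]]/}           # trim the padded register name (assumed detail)
            val=${val# }
            eval "${ref}[\$reg]=\$val"         # functions.sh@23: e.g. nvme2n1[nsze]="0x100000"
        done < <(nvme "$cmd" "$dev")
    }
    # Usage mirroring the trace: nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1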
nvme2n1 id-ns fields parsed into the nvme2n1 array: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
nvme2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:30.761 03:09:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:30.761 03:09:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:30.761 03:09:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:30.761 03:09:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:30.761 03:09:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:30.761 03:09:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
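Each lbafN string captured for these namespaces describes one LBA format: ms is the metadata bytes per block, lbads the log2 of the data block size, and rp a relative performance hint. The low nibble of flbas selects the active format, so flbas=0x4 points at lbaf4, i.e. lbads:12 or 4096-byte blocks with no separate metadata. A small illustrative helper under those assumptions (the function name and nameref lookup are mine, not from functions.sh):

    # Derive the in-use LBA data size from the fields captured above.
    lba_size() {
        local -n _ns=$1                        # e.g. lba_size nvme2n1
        local fmt=$(( ${_ns[flbas]} & 0xf ))   # low nibble of flbas indexes lbafN, here 4
        local lbads=${_ns[lbaf$fmt]#*lbads:}   # "ms:0 lbads:12 rp:0 (in use)" -> "12 rp:0 (in use)"
        echo $(( 1 << ${lbads%% *} ))          # 2^12 = 4096 bytes
    }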
nvme2n2 id-ns fields parsed into the nvme2n2 array (identical to nvme2n1): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
nvme2n2 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
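Because nvme_get declares each array with -g, the captured identify fields remain visible after the helper returns, presumably so that later steps of the nvme_scc test can assert on them. A couple of hypothetical reads against the values recorded above:

    echo "subnqn: ${nvme2[subnqn]}"               # nqn.2019-08.org.qemu:12342
    echo "nvme2n2 LBAs: $(( ${nvme2n2[nsze]} ))"  # 0x100000 -> 1048576
    (( ${nvme2[nn]} == 256 )) || echo "unexpected namespace count"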
00:09:30.763 03:09:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:09:30.763 03:09:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:30.763 03:09:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:30.763 03:09:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:30.763 03:09:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:30.763 03:09:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
nvme2n3 id-ns fields parsed into the nvme2n3 array: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
nvme2n3 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0'
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:31.026 03:09:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:31.026 03:09:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:31.026 03:09:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:31.026 03:09:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:31.026 03:09:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:31.026 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 
03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.027 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:31.028 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:31.029 03:09:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:31.029 03:09:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:31.029 
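The nvme_get trace above repeats one small parsing loop per register. Below is a minimal standalone sketch of that pattern, not the real nvme/functions.sh (whose @-line numbers appear in the trace): each "reg : val" line of `nvme id-ctrl` is folded into a global associative array, which is where pairs like nvme3[vid]=0x1b36 and nvme3[sn]='12343 ' come from. It assumes nvme-cli is installed (the CI box invokes it as /usr/local/src/nvme-cli/nvme) and that /dev/nvme3 exists.

```bash
#!/usr/bin/env bash
# Minimal sketch of the nvme_get pattern seen in this trace: read
# `nvme id-ctrl` output with IFS=: and eval each reg/val pair into
# a global associative array. Not the actual functions.sh source.
declare -gA nvme3=()
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue        # skip lines that do not split into reg:val
    reg=${reg//[[:space:]]/}         # strip the column padding around the key
    val=${val# }                     # drop the separator space before the value
    eval "nvme3[$reg]=\"$val\""      # same eval step the trace shows per register
done < <(nvme id-ctrl /dev/nvme3)
printf '%s=%s\n' vid "${nvme3[vid]}" oncs "${nvme3[oncs]}"
```

Each `eval 'nvme3[...]="..."'` pair in the trace is one iteration of exactly this kind of loop.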
03:09:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:09:31.029 03:09:34 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:09:31.029 03:09:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:09:31.029 03:09:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:09:31.029 03:09:34 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:31.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:31.854 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:31.854 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:31.854 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:32.112 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:32.112 03:09:35 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:32.112 03:09:35 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:32.112 03:09:35 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:32.112 03:09:35 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:32.112 ************************************
00:09:32.112 START TEST nvme_simple_copy ************************************
00:09:32.112 03:09:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:32.373 Initializing NVMe Controllers
00:09:32.373 Attaching to 0000:00:10.0
00:09:32.373 Controller supports SCC. Attached to 0000:00:10.0
00:09:32.373 Namespace ID: 1 size: 6GB
00:09:32.373 Initialization complete.
00:09:32.373
00:09:32.373 Controller QEMU NVMe Ctrl (12340 )
00:09:32.373 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:32.373 Namespace Block Size:4096
00:09:32.373 Writing LBAs 0 to 63 with Random Data
00:09:32.373 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:32.373 LBAs matching Written Data: 64
00:09:32.373
00:09:32.373 real 0m0.231s
00:09:32.373 user 0m0.074s
00:09:32.373 sys 0m0.056s
00:09:32.373 03:09:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:32.373 03:09:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:32.373 ************************************
00:09:32.373 END TEST nvme_simple_copy ************************************
00:09:32.373
00:09:32.373 real 0m7.512s
00:09:32.373 user 0m0.998s
00:09:32.373 sys 0m1.329s
00:09:32.373 03:09:35 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:32.373 ************************************
00:09:32.373 03:09:35 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:32.373 END TEST nvme_scc ************************************
00:09:32.373 03:09:35 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:32.373 03:09:35 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:32.373 03:09:35 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:32.373 03:09:35 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:32.373 03:09:35 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:32.373 03:09:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:32.373 03:09:35 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:32.373 03:09:35 -- common/autotest_common.sh@10 -- # set +x
00:09:32.373 ************************************
00:09:32.373 START TEST nvme_fdp ************************************
00:09:32.373 03:09:35 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:09:32.373 * Looking for test storage...
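The run_test wrapper brackets each test with the START/END banners and `time` output seen above, and the controller it exercised was picked by the ctrl_has_scc walk that preceded it: bit 8 of ONCS advertises the NVMe Copy command, which is what "Controller supports SCC" confirms at attach time (the reported Namespace Block Size of 4096 likewise matches the in-use format lbaf4 with lbads:12, i.e. 2^12 bytes). A reduced sketch of that capability test, using an oncs value captured from this trace:

```bash
#!/usr/bin/env bash
# Reduced form of the ctrl_has_scc check traced above: the scan left
# one associative array per controller; a bash nameref dereferences
# it and bit 8 of ONCS (the Copy command bit) decides SCC support.
declare -A nvme1=([oncs]=0x15d)        # value captured from this trace
ctrl_has_scc() {
    local -n _ctrl=$1                  # nameref, as in functions.sh@73
    (( ${_ctrl[oncs]:-0} & 1 << 8 ))   # the functions.sh@188 bit test
}
ctrl_has_scc nvme1 && echo "nvme1 supports simple copy"
```

0x15d is 0b1_0101_1101, so bit 8 (0x100) is set and all four QEMU controllers qualify; nvme1 is simply the first one the walk echoes.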
00:09:32.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:32.373 03:09:35 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:32.373 03:09:35 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:32.373 03:09:35 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:32.373 03:09:35 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:32.373 03:09:35 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.373 03:09:35 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:32.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.373 --rc genhtml_branch_coverage=1 00:09:32.373 --rc genhtml_function_coverage=1 00:09:32.373 --rc genhtml_legend=1 00:09:32.373 --rc geninfo_all_blocks=1 00:09:32.373 --rc geninfo_unexecuted_blocks=1 00:09:32.373 00:09:32.373 ' 00:09:32.373 03:09:35 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:32.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.373 --rc genhtml_branch_coverage=1 00:09:32.373 --rc genhtml_function_coverage=1 00:09:32.373 --rc genhtml_legend=1 00:09:32.373 --rc geninfo_all_blocks=1 00:09:32.373 --rc geninfo_unexecuted_blocks=1 00:09:32.373 00:09:32.373 ' 00:09:32.373 03:09:35 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:32.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.373 --rc genhtml_branch_coverage=1 00:09:32.373 --rc genhtml_function_coverage=1 00:09:32.373 --rc genhtml_legend=1 00:09:32.373 --rc geninfo_all_blocks=1 00:09:32.373 --rc geninfo_unexecuted_blocks=1 00:09:32.373 00:09:32.373 ' 00:09:32.373 03:09:35 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:32.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.373 --rc genhtml_branch_coverage=1 00:09:32.373 --rc genhtml_function_coverage=1 00:09:32.373 --rc genhtml_legend=1 00:09:32.373 --rc geninfo_all_blocks=1 00:09:32.373 --rc geninfo_unexecuted_blocks=1 00:09:32.373 00:09:32.373 ' 00:09:32.373 03:09:35 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:32.373 03:09:35 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:32.373 03:09:35 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:32.373 03:09:35 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:32.373 03:09:35 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.373 03:09:35 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.373 03:09:35 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.373 03:09:35 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.374 03:09:35 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.374 03:09:35 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:32.374 03:09:35 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
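The scripts/common.sh walk a few entries back is the stock bash version comparison: cmp_versions splits both version strings on ".", "-", and ":" into arrays and compares them field by field, which is how `lt 1.15 2` concludes the installed lcov predates 2.x. A condensed sketch follows; the real helper also validates each field with decimal(), while this one assumes purely numeric fields.

```bash
#!/usr/bin/env bash
# Condensed cmp_versions: split on . - : and compare numerically,
# field by field, padding the shorter version with zeros.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the trace's outcome
```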
00:09:32.374 03:09:35 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:32.374 03:09:35 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:32.374 03:09:35 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:32.374 03:09:35 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:32.374 03:09:35 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:32.374 03:09:35 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:32.374 03:09:35 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:32.374 03:09:35 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:32.374 03:09:35 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:32.374 03:09:35 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.374 03:09:35 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:32.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.897 Waiting for block devices as requested 00:09:32.897 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.897 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.158 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.158 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.459 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:38.459 03:09:41 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:38.459 03:09:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.459 03:09:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:38.459 03:09:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.459 03:09:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
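The `scan_nvme_ctrls` run that begins here walks /sys/class/nvme/nvme*, and for each controller `nvme_get` pipes `nvme id-ctrl /dev/nvmeX` through an `IFS=: read -r reg val` loop, storing each register into a global associative array; that is what the long run of `eval`/assignment pairs below is doing one field at a time. A compact sketch of the same parsing idea, assuming nvme-cli's `name : value` text output (the `ctrl` array name and the /dev/nvme0 target are illustrative):

    #!/usr/bin/env bash
    # Parse `nvme id-ctrl` text output into an associative array,
    # one entry per register, mirroring the eval pairs in the trace.
    declare -A ctrl

    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # register names collapse to one token
        val=${val#"${val%%[![:space:]]*}"}  # trim leading whitespace from the value
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)

    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"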
00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:38.459 03:09:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:38.459 03:09:41 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:38.459 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
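Fields such as the `oacs=0x12a` captured above are bitmasks: per the NVMe base specification, OACS bit 1 is Format NVM, bit 3 is Namespace Management, and bit 5 is Directives, so 0x12a advertises those among others. A short sketch of testing one capability bit from the parsed value (the `ctrl` array follows the earlier sketch and is illustrative):

    oacs=${ctrl[oacs]:-0x12a}   # value recorded in the trace above
    if (( (oacs >> 3) & 1 )); then
        echo "controller supports Namespace Management (OACS bit 3)"
    fi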
00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:38.460 03:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:38.460 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:38.461 03:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:38.461 03:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.461 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.461 
03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:38.462 03:09:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.462 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:38.463 03:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:38.463 03:09:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.463 03:09:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:38.463 03:09:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.463 03:09:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:38.463 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.464 
03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.464 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 
03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:38.465 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:38.466 03:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:38.467 03:09:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:38.468 03:09:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.468 03:09:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:38.468 03:09:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.468 03:09:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:38.468 
03:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:38.468 03:09:41 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.470 03:09:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:38.471 03:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.472 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 03:09:41 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:38.473 03:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:38.473 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.474 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
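The flbas value 0x4 parsed just above selects LBA format 4 for this namespace, which is why the lbaf4 descriptor below carries the "(in use)" marker: the lbads field in each lbafN string is a power-of-two exponent, so lbads:12 means 2^12 = 4096-byte data blocks, while the lbads:9 formats are 512-byte. A minimal bash sketch of that decode, working from the associative arrays that nvme_get fills in above; the helper name lba_data_size is hypothetical and not part of functions.sh:

    # Hypothetical helper (not in functions.sh): derive the active LBA data
    # size from the fields parsed above. The low nibble of flbas indexes the
    # in-use format, which suffices here since nlbaf=7 (8 formats total).
    lba_data_size() {
        local -n _ns=$1                       # e.g. nvme2n3, as filled by nvme_get
        local idx=$(( ${_ns[flbas]} & 0xf ))  # 0x4 & 0xf -> format index 4
        local lbads=${_ns[lbaf$idx]#*lbads:}  # "ms:0 lbads:12 rp:0 (in use)" -> "12 rp:0 (in use)"
        lbads=${lbads%% *}                    # -> "12"
        echo $(( 1 << lbads ))                # -> 4096
    }

With the values captured in this log, lba_data_size nvme2n3 would print 4096.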
00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:38.475 
03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.476 03:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:38.476 03:09:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.476 03:09:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:38.476 03:09:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.476 03:09:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:38.476 03:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 03:09:41 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.476 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 
03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 
03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
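Every read/eval pair in this stretch follows one pattern: nvme-cli prints identify data as "field : value" lines, and nvme_get folds each line into a bash associative array keyed by register name, which is how entries like nvme3[sqes]=0x66 appear above. A self-contained sketch of that loop, assuming nvme-cli's plain-text id-ctrl output; this is a simplification for illustration, not the exact functions.sh code:

    # Simplified stand-alone version of the nvme_get parsing loop.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # strip column padding: "sqes   " -> "sqes"
        [[ -n $reg && -n $val ]] || continue  # skip banners and blank lines
        ctrl[$reg]=${val# }                   # e.g. ctrl[ctratt]="0x88010"
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)

Because read assigns everything after the first colon to val, values that themselves contain colons, such as the ps0 power-state line above, survive intact.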
00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:38.479 03:09:41 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
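The ctrl_has_fdp probes in this loop all reduce to one arithmetic test: bit 19 of the CTRATT identify field advertises Flexible Data Placement support. That is why the controllers reporting ctratt=0x8000 fall through silently while nvme3, whose ctratt=0x88010 includes 0x80000, is the one echoed. The same test in isolation (has_fdp is a hypothetical name; the real check is the (( ctratt & 1 << 19 )) visible at functions.sh@180):

    # CTRATT bit 19 (0x80000) flags FDP support.
    #   0x88010 & (1 << 19) = 0x80000 -> true  (nvme3)
    #   0x8000  & (1 << 19) = 0       -> false (nvme0, nvme1, nvme2)
    has_fdp() { (( $1 & 1 << 19 )); }
    has_fdp 0x88010 && echo "FDP-capable"   # prints: FDP-capable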
00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:38.479 03:09:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:38.480 03:09:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:38.480 03:09:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:38.480 03:09:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:38.480 03:09:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:38.480 03:09:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:38.480 03:09:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:38.480 03:09:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:38.480 03:09:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:38.480 03:09:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.480 03:09:41 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:38.480 03:09:41 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:38.480 03:09:41 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:38.480 03:09:41 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:38.480 03:09:41 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:38.480 03:09:41 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:39.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.306 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.306 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.306 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.564 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.564 03:09:42 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:39.564 03:09:42 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:39.564 03:09:42 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.564 03:09:42 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:39.564 ************************************ 00:09:39.564 START TEST nvme_flexible_data_placement 00:09:39.564 ************************************ 00:09:39.564 03:09:42 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:39.823 Initializing NVMe Controllers 00:09:39.823 Attaching to 0000:00:13.0 00:09:39.823 Controller supports FDP Attached to 0000:00:13.0 00:09:39.823 Namespace ID: 1 Endurance Group ID: 1 00:09:39.823 Initialization complete. 00:09:39.823 00:09:39.823 ================================== 00:09:39.823 == FDP tests for Namespace: #01 == 00:09:39.823 ================================== 00:09:39.823 00:09:39.823 Get Feature: FDP: 00:09:39.823 ================= 00:09:39.823 Enabled: Yes 00:09:39.823 FDP configuration Index: 0 00:09:39.823 00:09:39.823 FDP configurations log page 00:09:39.823 =========================== 00:09:39.823 Number of FDP configurations: 1 00:09:39.823 Version: 0 00:09:39.823 Size: 112 00:09:39.823 FDP Configuration Descriptor: 0 00:09:39.823 Descriptor Size: 96 00:09:39.823 Reclaim Group Identifier format: 2 00:09:39.823 FDP Volatile Write Cache: Not Present 00:09:39.823 FDP Configuration: Valid 00:09:39.823 Vendor Specific Size: 0 00:09:39.823 Number of Reclaim Groups: 2 00:09:39.823 Number of Reclaim Unit Handles: 8 00:09:39.823 Max Placement Identifiers: 128 00:09:39.823 Number of Namespaces Supported: 256 00:09:39.823 Reclaim unit Nominal Size: 6000000 bytes 00:09:39.823 Estimated Reclaim Unit Time Limit: Not Reported 00:09:39.823 RUH Desc #000: RUH Type: Initially Isolated 00:09:39.823 RUH Desc #001: RUH Type: Initially Isolated 00:09:39.823 RUH Desc #002: RUH Type: Initially Isolated 00:09:39.823 RUH Desc #003: RUH Type: Initially Isolated 00:09:39.823 RUH Desc #004: RUH Type: Initially Isolated 00:09:39.823 RUH Desc #005: RUH Type: Initially Isolated 00:09:39.823 RUH Desc #006: RUH Type: Initially Isolated 00:09:39.823 RUH Desc #007: RUH Type: Initially Isolated 00:09:39.823 00:09:39.823 FDP reclaim unit handle usage log page 00:09:39.823 ====================================== 00:09:39.823 Number of Reclaim Unit Handles: 8 00:09:39.823 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:39.823 RUH Usage Desc #001: RUH Attributes: Unused 00:09:39.823 RUH Usage Desc #002: RUH Attributes: Unused 00:09:39.824 RUH Usage Desc #003: RUH Attributes: Unused 00:09:39.824 RUH Usage Desc #004: RUH Attributes: Unused 00:09:39.824 RUH Usage Desc #005: RUH Attributes: Unused 00:09:39.824 RUH Usage Desc #006: RUH Attributes: Unused 00:09:39.824 RUH Usage Desc #007: RUH Attributes: Unused 00:09:39.824 00:09:39.824 FDP statistics log page 00:09:39.824 ======================= 00:09:39.824 Host bytes with metadata written: 2166263808 00:09:39.824 Media bytes with metadata written: 2167451648 00:09:39.824 Media bytes erased: 0 00:09:39.824 00:09:39.824 FDP Reclaim unit handle status 00:09:39.824 ============================== 00:09:39.824 Number of RUHS descriptors: 2 00:09:39.824 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002e17 00:09:39.824 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:39.824 00:09:39.824 FDP write on placement id: 0 success 00:09:39.824 00:09:39.824 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:39.824 00:09:39.824 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:39.824 00:09:39.824 Get Feature: FDP Events for Placement handle: #0 00:09:39.824 ======================== 00:09:39.824 Number of FDP Events: 6 00:09:39.824 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:39.824 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:39.824 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:39.824 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:39.824 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:39.824 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:39.824 00:09:39.824 FDP events log page 00:09:39.824 =================== 00:09:39.824 Number of FDP events: 1 00:09:39.824 FDP Event #0: 00:09:39.824 Event Type: RU Not Written to Capacity 00:09:39.824 Placement Identifier: Valid 00:09:39.824 NSID: Valid 00:09:39.824 Location: Valid 00:09:39.824 Placement Identifier: 0 00:09:39.824 Event Timestamp: 2 00:09:39.824 Namespace Identifier: 1 00:09:39.824 Reclaim Group Identifier: 0 00:09:39.824 Reclaim Unit Handle Identifier: 0 00:09:39.824 00:09:39.824 FDP test passed 00:09:39.824 00:09:39.824 real 0m0.214s 00:09:39.824 user 0m0.055s 00:09:39.824 sys 0m0.056s 00:09:39.824 03:09:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.824 ************************************ 00:09:39.824 END TEST nvme_flexible_data_placement 00:09:39.824 ************************************ 00:09:39.824 03:09:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:39.824 00:09:39.824 real 0m7.450s 00:09:39.824 user 0m0.998s 00:09:39.824 sys 0m1.252s 00:09:39.824 03:09:43 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.824 ************************************ 00:09:39.824 END TEST nvme_fdp 00:09:39.824 ************************************ 00:09:39.824 03:09:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:39.824 03:09:43 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:39.824 03:09:43 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:39.824 03:09:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:39.824 03:09:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.824 03:09:43 -- common/autotest_common.sh@10 -- # set +x 00:09:39.824 ************************************ 00:09:39.824 START TEST nvme_rpc 00:09:39.824 ************************************ 00:09:39.824 03:09:43 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:39.824 * Looking for test storage... 
00:09:39.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:39.824 03:09:43 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:39.824 03:09:43 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:39.824 03:09:43 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:40.082 03:09:43 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.083 03:09:43 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:40.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.083 --rc genhtml_branch_coverage=1 00:09:40.083 --rc genhtml_function_coverage=1 00:09:40.083 --rc genhtml_legend=1 00:09:40.083 --rc geninfo_all_blocks=1 00:09:40.083 --rc geninfo_unexecuted_blocks=1 00:09:40.083 00:09:40.083 ' 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:40.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.083 --rc genhtml_branch_coverage=1 00:09:40.083 --rc genhtml_function_coverage=1 00:09:40.083 --rc genhtml_legend=1 00:09:40.083 --rc geninfo_all_blocks=1 00:09:40.083 --rc geninfo_unexecuted_blocks=1 00:09:40.083 00:09:40.083 ' 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:40.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.083 --rc genhtml_branch_coverage=1 00:09:40.083 --rc genhtml_function_coverage=1 00:09:40.083 --rc genhtml_legend=1 00:09:40.083 --rc geninfo_all_blocks=1 00:09:40.083 --rc geninfo_unexecuted_blocks=1 00:09:40.083 00:09:40.083 ' 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:40.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.083 --rc genhtml_branch_coverage=1 00:09:40.083 --rc genhtml_function_coverage=1 00:09:40.083 --rc genhtml_legend=1 00:09:40.083 --rc geninfo_all_blocks=1 00:09:40.083 --rc geninfo_unexecuted_blocks=1 00:09:40.083 00:09:40.083 ' 00:09:40.083 03:09:43 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.083 03:09:43 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:40.083 03:09:43 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:40.083 03:09:43 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77930 00:09:40.083 03:09:43 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:40.083 03:09:43 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:40.083 03:09:43 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77930 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 77930 ']' 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.083 03:09:43 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.083 [2024-11-18 03:09:43.571294] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
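Editor's note: the get_first_nvme_bdf helper traced above builds its candidate list by piping scripts/gen_nvme.sh output through jq and taking the first PCI address. A minimal standalone sketch of that pattern, assuming (as the trace shows) that gen_nvme.sh emits a bdev config JSON with .config[].params.traddr fields; the repo path is the one from this log:

    #!/usr/bin/env bash
    # Sketch of the get_nvme_bdfs / get_first_nvme_bdf flow traced above.
    rootdir=/home/vagrant/spdk_repo/spdk   # path as seen in this log
    # Collect every NVMe PCI address the generated bdev config refers to.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"    # 0000:00:10.0 ... 0000:00:13.0 in this run
    echo "first bdf: ${bdfs[0]}"  # the device nvme_rpc.sh attaches to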
00:09:40.083 [2024-11-18 03:09:43.571435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77930 ] 00:09:40.342 [2024-11-18 03:09:43.715682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:40.342 [2024-11-18 03:09:43.749142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.342 [2024-11-18 03:09:43.749229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.908 03:09:44 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.908 03:09:44 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:40.908 03:09:44 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:41.166 Nvme0n1 00:09:41.166 03:09:44 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:41.166 03:09:44 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:41.427 request: 00:09:41.427 { 00:09:41.427 "bdev_name": "Nvme0n1", 00:09:41.427 "filename": "non_existing_file", 00:09:41.427 "method": "bdev_nvme_apply_firmware", 00:09:41.427 "req_id": 1 00:09:41.427 } 00:09:41.427 Got JSON-RPC error response 00:09:41.427 response: 00:09:41.427 { 00:09:41.427 "code": -32603, 00:09:41.427 "message": "open file failed." 00:09:41.427 } 00:09:41.427 03:09:44 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:41.427 03:09:44 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:41.427 03:09:44 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:41.689 03:09:45 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:41.689 03:09:45 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77930 00:09:41.689 03:09:45 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 77930 ']' 00:09:41.689 03:09:45 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 77930 00:09:41.689 03:09:45 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:41.689 03:09:45 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:41.689 03:09:45 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77930 00:09:41.689 killing process with pid 77930 00:09:41.689 03:09:45 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:41.689 03:09:45 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:41.689 03:09:45 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77930' 00:09:41.689 03:09:45 nvme_rpc -- common/autotest_common.sh@969 -- # kill 77930 00:09:41.689 03:09:45 nvme_rpc -- common/autotest_common.sh@974 -- # wait 77930 00:09:41.949 00:09:41.949 real 0m2.068s 00:09:41.949 user 0m3.993s 00:09:41.949 sys 0m0.482s 00:09:41.949 03:09:45 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.949 ************************************ 00:09:41.949 END TEST nvme_rpc 00:09:41.949 03:09:45 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.949 ************************************ 00:09:41.949 03:09:45 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:41.949 03:09:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:09:41.949 03:09:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.949 03:09:45 -- common/autotest_common.sh@10 -- # set +x 00:09:41.949 ************************************ 00:09:41.949 START TEST nvme_rpc_timeouts 00:09:41.949 ************************************ 00:09:41.949 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:41.949 * Looking for test storage... 00:09:41.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:41.949 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:41.949 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:09:41.949 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:42.265 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:42.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
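Editor's note: the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above is printed by the waitforlisten helper while it polls for the target's RPC socket. A simplified sketch of that wait loop; the real helper in autotest_common.sh also probes the socket through rpc.py, so the bare existence test here is a deliberate simplification:

    # Simplified waitforlisten: succeed once $sock appears while $pid is alive.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1  # target exited early
            [[ -S $sock ]] && return 0              # socket created: listening
            sleep 0.1
        done
        return 1                                    # timed out
    }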
00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.265 03:09:45 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:42.265 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.265 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:42.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.265 --rc genhtml_branch_coverage=1 00:09:42.265 --rc genhtml_function_coverage=1 00:09:42.265 --rc genhtml_legend=1 00:09:42.265 --rc geninfo_all_blocks=1 00:09:42.265 --rc geninfo_unexecuted_blocks=1 00:09:42.265 00:09:42.265 ' 00:09:42.265 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:42.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.265 --rc genhtml_branch_coverage=1 00:09:42.265 --rc genhtml_function_coverage=1 00:09:42.265 --rc genhtml_legend=1 00:09:42.265 --rc geninfo_all_blocks=1 00:09:42.265 --rc geninfo_unexecuted_blocks=1 00:09:42.265 00:09:42.265 ' 00:09:42.265 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:42.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.265 --rc genhtml_branch_coverage=1 00:09:42.265 --rc genhtml_function_coverage=1 00:09:42.265 --rc genhtml_legend=1 00:09:42.265 --rc geninfo_all_blocks=1 00:09:42.265 --rc geninfo_unexecuted_blocks=1 00:09:42.265 00:09:42.265 ' 00:09:42.265 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:42.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.265 --rc genhtml_branch_coverage=1 00:09:42.265 --rc genhtml_function_coverage=1 00:09:42.265 --rc genhtml_legend=1 00:09:42.265 --rc geninfo_all_blocks=1 00:09:42.265 --rc geninfo_unexecuted_blocks=1 00:09:42.265 00:09:42.265 ' 00:09:42.265 03:09:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.265 03:09:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77984 00:09:42.265 03:09:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77984 00:09:42.265 03:09:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78016 00:09:42.265 03:09:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:42.265 03:09:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78016 00:09:42.265 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 78016 ']' 00:09:42.265 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.265 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.265 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
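Editor's note: the scripts/common.sh calls traced above (lt, cmp_versions, decimal) amount to a field-by-field compare of dotted version strings, used here to decide whether the installed lcov predates 2.x. A condensed sketch of the same idea; the function name is mine, and the real helper additionally validates each field through its decimal() guard:

    # Return 0 when version $1 sorts before $2, comparing numeric fields
    # split on the same IFS characters (. - :) the traced helper uses.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov"  # mirrors the 'lt 1.15 2' check above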
00:09:42.265 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.265 03:09:45 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:42.265 03:09:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:42.266 [2024-11-18 03:09:45.639531] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:42.266 [2024-11-18 03:09:45.639657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78016 ] 00:09:42.266 [2024-11-18 03:09:45.787407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.524 [2024-11-18 03:09:45.821241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.524 [2024-11-18 03:09:45.821305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.091 Checking default timeout settings: 00:09:43.091 03:09:46 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.091 03:09:46 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:43.091 03:09:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:43.091 03:09:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:43.349 Making settings changes with rpc: 00:09:43.349 03:09:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:43.349 03:09:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:43.608 Check default vs. modified settings: 00:09:43.608 03:09:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:43.608 03:09:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77984 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77984 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:43.867 Setting action_on_timeout is changed as expected. 
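Editor's note: the settings change above is a single JSON-RPC call bracketed by two config snapshots. Reconstructed as a direct shell sequence with the exact values from this run; the redirections into the two tmpfiles are inferred, since xtrace does not display redirections:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Snapshot the defaults before touching anything.
    "$rpc" save_config > /tmp/settings_default_77984
    # 12 s I/O timeout, 24 s admin timeout, abort the command on expiry.
    "$rpc" bdev_nvme_set_options \
        --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    # Snapshot again; the two files are then compared setting by setting.
    "$rpc" save_config > /tmp/settings_modified_77984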
00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77984 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77984 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:43.867 Setting timeout_us is changed as expected. 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77984 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77984 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:43.867 Setting timeout_admin_us is changed as expected. 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:43.867 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:43.868 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
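Editor's note: each "changed as expected" line above comes from the same three-stage grep | awk | sed pipeline run against both snapshots. One iteration, condensed into a helper; the helper name is mine, but the pipeline and messages match the nvme_rpc_timeouts.sh@39-47 trace:

    # Extract a setting's value from each saved config and require that it
    # differs between the default and modified snapshots.
    check_setting() {
        local setting=$1 before after
        before=$(grep "$setting" /tmp/settings_default_77984 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_77984 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" == "$after" ] && { echo "Setting $setting was not changed!"; return 1; }
        echo "Setting $setting is changed as expected."
    }
    for s in action_on_timeout timeout_us timeout_admin_us; do check_setting "$s"; done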
00:09:43.868 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:43.868 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77984 /tmp/settings_modified_77984 00:09:43.868 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78016 00:09:43.868 03:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 78016 ']' 00:09:43.868 03:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 78016 00:09:43.868 03:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:43.868 03:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.868 03:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78016 00:09:43.868 killing process with pid 78016 00:09:43.868 03:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:43.868 03:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:43.868 03:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78016' 00:09:43.868 03:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 78016 00:09:43.868 03:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 78016 00:09:44.125 RPC TIMEOUT SETTING TEST PASSED. 00:09:44.125 03:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:44.125 ************************************ 00:09:44.125 END TEST nvme_rpc_timeouts 00:09:44.125 ************************************ 00:09:44.125 00:09:44.125 real 0m2.199s 00:09:44.125 user 0m4.400s 00:09:44.125 sys 0m0.452s 00:09:44.125 03:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.125 03:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:44.125 03:09:47 -- spdk/autotest.sh@239 -- # uname -s 00:09:44.125 03:09:47 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:44.125 03:09:47 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:44.125 03:09:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:44.125 03:09:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.125 03:09:47 -- common/autotest_common.sh@10 -- # set +x 00:09:44.125 ************************************ 00:09:44.125 START TEST sw_hotplug 00:09:44.125 ************************************ 00:09:44.125 03:09:47 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:44.384 * Looking for test storage... 
00:09:44.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:44.384 03:09:47 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:44.384 03:09:47 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:09:44.384 03:09:47 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:44.384 03:09:47 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.384 03:09:47 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:44.384 03:09:47 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.384 03:09:47 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:44.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.384 --rc genhtml_branch_coverage=1 00:09:44.384 --rc genhtml_function_coverage=1 00:09:44.384 --rc genhtml_legend=1 00:09:44.384 --rc geninfo_all_blocks=1 00:09:44.384 --rc geninfo_unexecuted_blocks=1 00:09:44.384 00:09:44.384 ' 00:09:44.384 03:09:47 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:44.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.384 --rc genhtml_branch_coverage=1 00:09:44.384 --rc genhtml_function_coverage=1 00:09:44.384 --rc genhtml_legend=1 00:09:44.384 --rc geninfo_all_blocks=1 00:09:44.384 --rc geninfo_unexecuted_blocks=1 00:09:44.384 00:09:44.384 ' 00:09:44.384 03:09:47 
sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:44.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.384 --rc genhtml_branch_coverage=1 00:09:44.384 --rc genhtml_function_coverage=1 00:09:44.384 --rc genhtml_legend=1 00:09:44.384 --rc geninfo_all_blocks=1 00:09:44.384 --rc geninfo_unexecuted_blocks=1 00:09:44.384 00:09:44.384 ' 00:09:44.384 03:09:47 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:44.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.384 --rc genhtml_branch_coverage=1 00:09:44.384 --rc genhtml_function_coverage=1 00:09:44.384 --rc genhtml_legend=1 00:09:44.384 --rc geninfo_all_blocks=1 00:09:44.384 --rc geninfo_unexecuted_blocks=1 00:09:44.384 00:09:44.384 ' 00:09:44.384 03:09:47 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:44.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.643 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:44.643 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:44.643 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:44.643 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:44.643 03:09:48 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:44.643 03:09:48 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:44.643 03:09:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:44.643 03:09:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:44.643 03:09:48 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:44.643 03:09:48 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:44.643 03:09:48 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:44.643 03:09:48 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:44.643 03:09:48 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:44.643 03:09:48 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:44.643 03:09:48 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:44.643 03:09:48 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:44.643 03:09:48 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:44.643 03:09:48 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:44.902 
03:09:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:44.902 03:09:48 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:44.903 03:09:48 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:44.903 03:09:48 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:44.903 03:09:48 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:44.903 03:09:48 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:45.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:45.161 Waiting for block devices as requested 00:09:45.161 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.420 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.420 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.420 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.702 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:50.702 03:09:53 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:50.702 03:09:53 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:50.960 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:50.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.960 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:51.218 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:51.476 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.476 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.476 03:09:54 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:51.476 03:09:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:51.476 03:09:54 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:51.476 03:09:54 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:51.476 03:09:54 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78857 00:09:51.476 03:09:54 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:51.476 03:09:54 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:51.476 03:09:54 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:51.476 03:09:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:51.476 03:09:54 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:51.476 03:09:54 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:51.476 03:09:54 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:51.476 03:09:54 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:51.476 03:09:54 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:51.476 03:09:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:51.476 03:09:54 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:51.476 03:09:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:51.476 03:09:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:51.476 03:09:54 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:51.735 Initializing NVMe Controllers 00:09:51.735 Attaching to 0000:00:10.0 00:09:51.735 Attaching to 0000:00:11.0 00:09:51.735 Attached to 0000:00:10.0 00:09:51.735 Attached to 0000:00:11.0 00:09:51.735 Initialization complete. Starting I/O... 00:09:51.735 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:51.735 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:51.735 00:09:52.669 QEMU NVMe Ctrl (12340 ): 3214 I/Os completed (+3214) 00:09:52.669 QEMU NVMe Ctrl (12341 ): 3050 I/Os completed (+3050) 00:09:52.669 00:09:53.611 QEMU NVMe Ctrl (12340 ): 7002 I/Os completed (+3788) 00:09:53.611 QEMU NVMe Ctrl (12341 ): 6914 I/Os completed (+3864) 00:09:53.611 00:09:54.555 QEMU NVMe Ctrl (12340 ): 11003 I/Os completed (+4001) 00:09:54.555 QEMU NVMe Ctrl (12341 ): 10896 I/Os completed (+3982) 00:09:54.555 00:09:55.942 QEMU NVMe Ctrl (12340 ): 15448 I/Os completed (+4445) 00:09:55.942 QEMU NVMe Ctrl (12341 ): 15303 I/Os completed (+4407) 00:09:55.942 00:09:56.884 QEMU NVMe Ctrl (12340 ): 19671 I/Os completed (+4223) 00:09:56.884 QEMU NVMe Ctrl (12341 ): 19498 I/Os completed (+4195) 00:09:56.884 00:09:57.454 03:10:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:57.454 03:10:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.454 03:10:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.454 [2024-11-18 03:10:00.946276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:57.454 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:57.454 [2024-11-18 03:10:00.947233] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 [2024-11-18 03:10:00.947349] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 [2024-11-18 03:10:00.947381] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 [2024-11-18 03:10:00.947396] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:57.454 [2024-11-18 03:10:00.948529] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 [2024-11-18 03:10:00.948579] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 [2024-11-18 03:10:00.948603] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 [2024-11-18 03:10:00.948623] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 EAL: Cannot open sysfs resource 00:09:57.454 EAL: pci_scan_one(): cannot parse resource 00:09:57.454 EAL: Scan for (pci) bus failed. 00:09:57.454 03:10:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.454 03:10:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.454 [2024-11-18 03:10:00.969441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:57.454 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:57.454 [2024-11-18 03:10:00.970305] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 [2024-11-18 03:10:00.970409] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 [2024-11-18 03:10:00.970474] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 [2024-11-18 03:10:00.970501] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:57.454 [2024-11-18 03:10:00.971424] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 [2024-11-18 03:10:00.971454] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 [2024-11-18 03:10:00.971467] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 [2024-11-18 03:10:00.971477] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.454 03:10:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:57.454 03:10:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:57.714 03:10:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.714 03:10:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.714 03:10:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:57.714 03:10:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:57.714 00:09:57.714 03:10:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:57.714 03:10:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.714 03:10:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.714 03:10:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:57.714 Attaching to 0000:00:10.0 00:09:57.714 Attached to 0000:00:10.0 00:09:57.714 03:10:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:57.714 03:10:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:57.714 03:10:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:57.714 Attaching to 0000:00:11.0 00:09:57.714 Attached to 0000:00:11.0 00:09:58.654 QEMU NVMe Ctrl (12340 ): 4369 I/Os completed (+4369) 00:09:58.654 QEMU NVMe Ctrl (12341 ): 4004 I/Os completed (+4004) 00:09:58.654 00:09:59.597 QEMU NVMe Ctrl (12340 ): 8738 I/Os completed (+4369) 00:09:59.597 QEMU NVMe Ctrl (12341 ): 8325 I/Os completed (+4321) 00:09:59.597 00:10:00.982 QEMU NVMe Ctrl (12340 ): 13429 I/Os completed (+4691) 00:10:00.982 QEMU NVMe Ctrl (12341 ): 13014 I/Os completed (+4689) 00:10:00.982 00:10:01.549 QEMU NVMe Ctrl (12340 ): 17327 I/Os completed (+3898) 00:10:01.549 QEMU NVMe Ctrl (12341 ): 16909 I/Os completed (+3895) 00:10:01.549 00:10:02.922 QEMU NVMe Ctrl (12340 ): 21035 I/Os completed (+3708) 00:10:02.922 QEMU NVMe Ctrl (12341 ): 20629 I/Os completed (+3720) 00:10:02.922 00:10:03.854 QEMU NVMe Ctrl (12340 ): 25293 I/Os completed (+4258) 00:10:03.855 QEMU NVMe Ctrl (12341 ): 24868 I/Os completed (+4239) 00:10:03.855 00:10:04.788 QEMU NVMe Ctrl (12340 ): 29594 I/Os completed (+4301) 00:10:04.788 QEMU NVMe Ctrl (12341 ): 29115 I/Os completed (+4247) 00:10:04.788 00:10:05.728 QEMU NVMe Ctrl (12340 ): 33874 I/Os completed (+4280) 00:10:05.728 QEMU NVMe Ctrl (12341 ): 33352 I/Os completed (+4237) 
00:10:05.728 00:10:06.667 QEMU NVMe Ctrl (12340 ): 38086 I/Os completed (+4212) 00:10:06.667 QEMU NVMe Ctrl (12341 ): 37570 I/Os completed (+4218) 00:10:06.667 00:10:07.607 QEMU NVMe Ctrl (12340 ): 42341 I/Os completed (+4255) 00:10:07.607 QEMU NVMe Ctrl (12341 ): 41839 I/Os completed (+4269) 00:10:07.607 00:10:08.550 QEMU NVMe Ctrl (12340 ): 46530 I/Os completed (+4189) 00:10:08.550 QEMU NVMe Ctrl (12341 ): 46008 I/Os completed (+4169) 00:10:08.550 00:10:09.935 QEMU NVMe Ctrl (12340 ): 50714 I/Os completed (+4184) 00:10:09.935 QEMU NVMe Ctrl (12341 ): 50203 I/Os completed (+4195) 00:10:09.935 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.935 [2024-11-18 03:10:13.205173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:09.935 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:09.935 [2024-11-18 03:10:13.206056] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 [2024-11-18 03:10:13.206163] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 [2024-11-18 03:10:13.206195] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 [2024-11-18 03:10:13.206253] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:09.935 [2024-11-18 03:10:13.207342] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 [2024-11-18 03:10:13.207435] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 [2024-11-18 03:10:13.207523] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 [2024-11-18 03:10:13.207549] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.935 [2024-11-18 03:10:13.224707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:09.935 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:09.935 [2024-11-18 03:10:13.225465] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 [2024-11-18 03:10:13.225549] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 [2024-11-18 03:10:13.225576] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 [2024-11-18 03:10:13.225626] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:09.935 [2024-11-18 03:10:13.226497] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 [2024-11-18 03:10:13.226588] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 [2024-11-18 03:10:13.226618] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 [2024-11-18 03:10:13.226674] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:09.935 Attaching to 0000:00:10.0 00:10:09.935 Attached to 0000:00:10.0 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.935 03:10:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:09.935 Attaching to 0000:00:11.0 00:10:09.935 Attached to 0000:00:11.0 00:10:10.879 QEMU NVMe Ctrl (12340 ): 3274 I/Os completed (+3274) 00:10:10.879 QEMU NVMe Ctrl (12341 ): 2878 I/Os completed (+2878) 00:10:10.879 00:10:11.854 QEMU NVMe Ctrl (12340 ): 7473 I/Os completed (+4199) 00:10:11.854 QEMU NVMe Ctrl (12341 ): 7058 I/Os completed (+4180) 00:10:11.854 00:10:12.789 QEMU NVMe Ctrl (12340 ): 11262 I/Os completed (+3789) 00:10:12.789 QEMU NVMe Ctrl (12341 ): 10766 I/Os completed (+3708) 00:10:12.789 00:10:13.723 QEMU NVMe Ctrl (12340 ): 15010 I/Os completed (+3748) 00:10:13.723 QEMU NVMe Ctrl (12341 ): 14510 I/Os completed (+3744) 00:10:13.723 00:10:14.661 QEMU NVMe Ctrl (12340 ): 19027 I/Os completed (+4017) 00:10:14.661 QEMU NVMe Ctrl (12341 ): 18514 I/Os completed (+4004) 00:10:14.661 00:10:15.595 QEMU NVMe Ctrl (12340 ): 23322 I/Os completed (+4295) 00:10:15.595 QEMU NVMe Ctrl (12341 ): 22793 I/Os completed (+4279) 00:10:15.595 00:10:16.966 QEMU NVMe Ctrl (12340 ): 27600 I/Os completed (+4278) 00:10:16.966 QEMU NVMe Ctrl (12341 ): 27032 I/Os completed (+4239) 00:10:16.966 00:10:17.898 QEMU NVMe Ctrl (12340 ): 31845 I/Os completed (+4245) 00:10:17.898 QEMU NVMe Ctrl (12341 ): 31270 I/Os completed (+4238) 00:10:17.898 
00:10:18.831 QEMU NVMe Ctrl (12340 ): 36097 I/Os completed (+4252) 00:10:18.831 QEMU NVMe Ctrl (12341 ): 35493 I/Os completed (+4223) 00:10:18.831 00:10:19.765 QEMU NVMe Ctrl (12340 ): 40337 I/Os completed (+4240) 00:10:19.765 QEMU NVMe Ctrl (12341 ): 39728 I/Os completed (+4235) 00:10:19.765 00:10:20.699 QEMU NVMe Ctrl (12340 ): 44597 I/Os completed (+4260) 00:10:20.699 QEMU NVMe Ctrl (12341 ): 43986 I/Os completed (+4258) 00:10:20.699 00:10:21.632 QEMU NVMe Ctrl (12340 ): 48405 I/Os completed (+3808) 00:10:21.632 QEMU NVMe Ctrl (12341 ): 47815 I/Os completed (+3829) 00:10:21.632 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.198 [2024-11-18 03:10:25.474595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:22.198 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:22.198 [2024-11-18 03:10:25.475771] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 [2024-11-18 03:10:25.475894] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 [2024-11-18 03:10:25.475929] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 [2024-11-18 03:10:25.475996] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:22.198 [2024-11-18 03:10:25.477203] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 [2024-11-18 03:10:25.477263] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 [2024-11-18 03:10:25.477292] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 [2024-11-18 03:10:25.477342] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.198 [2024-11-18 03:10:25.496706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:22.198 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:22.198 [2024-11-18 03:10:25.497674] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 [2024-11-18 03:10:25.498087] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 [2024-11-18 03:10:25.498216] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 [2024-11-18 03:10:25.498236] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:22.198 [2024-11-18 03:10:25.499415] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 [2024-11-18 03:10:25.499446] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 [2024-11-18 03:10:25.499462] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 [2024-11-18 03:10:25.499475] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:22.198 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:22.198 EAL: Scan for (pci) bus failed. 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.198 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:22.198 Attaching to 0000:00:10.0 00:10:22.198 Attached to 0000:00:10.0 00:10:22.457 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:22.457 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.457 03:10:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:22.457 Attaching to 0000:00:11.0 00:10:22.457 Attached to 0000:00:11.0 00:10:22.457 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:22.457 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:22.457 [2024-11-18 03:10:25.794868] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:34.736 03:10:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:34.736 03:10:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:34.736 03:10:37 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.85 00:10:34.736 03:10:37 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.85 00:10:34.736 03:10:37 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:34.736 03:10:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.85 00:10:34.736 03:10:37 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.85 2 00:10:34.736 remove_attach_helper took 42.85s to complete (handling 2 nvme drive(s)) 03:10:37 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:41.309 03:10:43 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78857 00:10:41.309 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78857) - No such process 00:10:41.309 03:10:43 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78857 00:10:41.309 03:10:43 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:41.309 03:10:43 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:41.309 03:10:43 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:41.309 03:10:43 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79407 00:10:41.309 03:10:43 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:41.309 03:10:43 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79407 00:10:41.309 03:10:43 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:41.310 03:10:43 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 79407 ']' 00:10:41.310 03:10:43 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.310 03:10:43 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.310 03:10:43 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.310 03:10:43 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.310 03:10:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.310 [2024-11-18 03:10:43.886949] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
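[Note on the trace above] This block switches from the standalone helper to target mode: `kill -0 78857` confirms the previous process is already gone (signal 0 delivers nothing and only tests for existence), a fresh `spdk_tgt` is launched as pid 79407 with a trap that re-scans the PCI bus even on abnormal exit, and `waitforlisten` blocks until the target answers on /var/tmp/spdk.sock. A sketch of that liveness/cleanup pattern, with illustrative helper names rather than the real autotest_common.sh implementations:

    # Illustrative sketch only; waitforlisten/killprocess in autotest_common.sh
    # are more involved (retry limits, RPC socket probing).
    wait_for_exit() {
        local pid=$1
        while kill -0 "$pid" 2>/dev/null; do   # kill -0: probe without signaling
            sleep 1
        done
    }
    # Restore PCI state no matter how the test ends, as the trap at @112 does:
    trap 'kill "$spdk_tgt_pid"; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT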
00:10:41.310 [2024-11-18 03:10:43.887342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79407 ] 00:10:41.310 [2024-11-18 03:10:44.037819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.310 [2024-11-18 03:10:44.089659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.310 03:10:44 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.310 03:10:44 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:41.310 03:10:44 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:41.310 03:10:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.310 03:10:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.310 03:10:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.310 03:10:44 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:41.310 03:10:44 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:41.310 03:10:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:41.310 03:10:44 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:41.310 03:10:44 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:41.310 03:10:44 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:41.310 03:10:44 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:41.310 03:10:44 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:41.310 03:10:44 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:41.310 03:10:44 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:41.310 03:10:44 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:41.310 03:10:44 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:41.310 03:10:44 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:47.875 03:10:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:47.875 03:10:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.875 03:10:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.875 03:10:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.875 03:10:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.875 03:10:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:47.875 03:10:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.875 03:10:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.875 03:10:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.875 03:10:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.875 03:10:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.875 03:10:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.875 03:10:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.875 03:10:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.875 03:10:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:47.875 03:10:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:47.875 [2024-11-18 03:10:50.836864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:10:47.875 [2024-11-18 03:10:50.837972] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.875 [2024-11-18 03:10:50.838008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.875 [2024-11-18 03:10:50.838022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.875 [2024-11-18 03:10:50.838035] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.875 [2024-11-18 03:10:50.838044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.875 [2024-11-18 03:10:50.838051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.875 [2024-11-18 03:10:50.838060] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.875 [2024-11-18 03:10:50.838067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.875 [2024-11-18 03:10:50.838075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.875 [2024-11-18 03:10:50.838081] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.875 [2024-11-18 03:10:50.838089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.875 [2024-11-18 03:10:50.838096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.875 [2024-11-18 03:10:51.236891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:47.875 [2024-11-18 03:10:51.238105] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.875 [2024-11-18 03:10:51.238138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.875 [2024-11-18 03:10:51.238147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.875 [2024-11-18 03:10:51.238160] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.875 [2024-11-18 03:10:51.238167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.875 [2024-11-18 03:10:51.238176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.875 [2024-11-18 03:10:51.238183] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.875 [2024-11-18 03:10:51.238194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.875 [2024-11-18 03:10:51.238200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.875 [2024-11-18 03:10:51.238209] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.875 [2024-11-18 03:10:51.238215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.875 [2024-11-18 03:10:51.238223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.875 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:47.875 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.875 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.875 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.875 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.875 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.875 03:10:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.875 03:10:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.875 03:10:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.875 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:47.875 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:47.875 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.875 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.875 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:48.135 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:48.135 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.135 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.135 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.135 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:48.135 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:48.135 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.135 03:10:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:00.359 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:00.359 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:00.359 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:00.359 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.359 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.359 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.359 03:11:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.359 03:11:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.359 03:11:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.359 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:00.359 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:00.360 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.360 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.360 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.360 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.360 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:00.360 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.360 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.360 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.360 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.360 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.360 03:11:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.360 03:11:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.360 03:11:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.360 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:00.360 03:11:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:00.360 [2024-11-18 03:11:03.737048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:00.360 [2024-11-18 03:11:03.738213] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.360 [2024-11-18 03:11:03.738245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.360 [2024-11-18 03:11:03.738257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.360 [2024-11-18 03:11:03.738268] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.360 [2024-11-18 03:11:03.738276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.360 [2024-11-18 03:11:03.738283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.360 [2024-11-18 03:11:03.738290] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.360 [2024-11-18 03:11:03.738296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.360 [2024-11-18 03:11:03.738304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.360 [2024-11-18 03:11:03.738310] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.360 [2024-11-18 03:11:03.738332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.360 [2024-11-18 03:11:03.738338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.932 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:00.932 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.932 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.932 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.932 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.932 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.932 03:11:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.932 03:11:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.932 03:11:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.932 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:00.932 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:00.932 [2024-11-18 03:11:04.337056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:00.932 [2024-11-18 03:11:04.338152] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.932 [2024-11-18 03:11:04.338186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.932 [2024-11-18 03:11:04.338197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.932 [2024-11-18 03:11:04.338209] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.932 [2024-11-18 03:11:04.338215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.932 [2024-11-18 03:11:04.338224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.932 [2024-11-18 03:11:04.338230] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.932 [2024-11-18 03:11:04.338238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.932 [2024-11-18 03:11:04.338244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.932 [2024-11-18 03:11:04.338251] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.932 [2024-11-18 03:11:04.338257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.932 [2024-11-18 03:11:04.338265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.194 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:01.194 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:01.194 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:01.194 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:01.194 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:01.194 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:01.194 03:11:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.194 03:11:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:01.194 03:11:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.455 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:01.455 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:01.455 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.455 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.455 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:01.455 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:01.455 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.455 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.455 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.455 03:11:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
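[Note on the trace above] The `bdev_bdfs` helper traced at @12-@13 asks the running target which NVMe bdevs still exist and reduces the answer to a sorted list of PCI addresses; @50-@51 then poll it every half second until the removed controllers' addresses disappear, printing `Still waiting for %s to be gone` in the meantime. A reconstruction of that poll follows: the trace shows jq reading /dev/fd/63 because the script uses process substitution, but a plain pipe is equivalent, and `rpc_cmd` is assumed to wrap SPDK's scripts/rpc.py against /var/tmp/spdk.sock:

    # Poll until the hot-removed devices' bdevs vanish from the target.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done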
00:11:01.455 03:11:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:01.455 03:11:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.455 03:11:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.686 03:11:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.686 03:11:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.686 03:11:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.686 03:11:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.686 03:11:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.686 03:11:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:13.686 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:13.686 [2024-11-18 03:11:17.137250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:13.686 [2024-11-18 03:11:17.138335] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.686 [2024-11-18 03:11:17.138362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.686 [2024-11-18 03:11:17.138376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.686 [2024-11-18 03:11:17.138388] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.686 [2024-11-18 03:11:17.138396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.686 [2024-11-18 03:11:17.138404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.686 [2024-11-18 03:11:17.138412] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.686 [2024-11-18 03:11:17.138418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.686 [2024-11-18 03:11:17.138426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.686 [2024-11-18 03:11:17.138432] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.686 [2024-11-18 03:11:17.138441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.686 [2024-11-18 03:11:17.138447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.256 [2024-11-18 03:11:17.537252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:14.256 [2024-11-18 03:11:17.538364] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.256 [2024-11-18 03:11:17.538392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.256 [2024-11-18 03:11:17.538402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.256 [2024-11-18 03:11:17.538411] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.256 [2024-11-18 03:11:17.538418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.256 [2024-11-18 03:11:17.538428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.256 [2024-11-18 03:11:17.538434] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.256 [2024-11-18 03:11:17.538442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.256 [2024-11-18 03:11:17.538448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.256 [2024-11-18 03:11:17.538455] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.256 [2024-11-18 03:11:17.538461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.256 [2024-11-18 03:11:17.538468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.256 03:11:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.256 03:11:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.256 03:11:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.256 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.257 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:14.517 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:14.517 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.517 03:11:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.18 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.18 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:11:26.743 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:26.743 03:11:29 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:26.743 03:11:29 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:26.743 03:11:29 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:33.332 03:11:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:33.332 03:11:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:33.332 03:11:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:33.332 03:11:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:33.332 03:11:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:33.332 03:11:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:33.332 03:11:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:33.332 03:11:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:33.332 03:11:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:33.332 03:11:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:33.332 03:11:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:33.332 03:11:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.332 03:11:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:33.332 03:11:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.332 03:11:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:33.332 03:11:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:33.332 [2024-11-18 03:11:36.048423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:33.332 [2024-11-18 03:11:36.049460] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.332 [2024-11-18 03:11:36.049492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.332 [2024-11-18 03:11:36.049506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.332 [2024-11-18 03:11:36.049517] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.332 [2024-11-18 03:11:36.049526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.332 [2024-11-18 03:11:36.049533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.333 [2024-11-18 03:11:36.049541] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.333 [2024-11-18 03:11:36.049548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.333 [2024-11-18 03:11:36.049557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.333 [2024-11-18 03:11:36.049567] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.333 [2024-11-18 03:11:36.049575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.333 [2024-11-18 03:11:36.049581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.333 03:11:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:33.333 03:11:36 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:33.333 03:11:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:33.333 03:11:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:33.333 03:11:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:33.333 03:11:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:33.333 03:11:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.333 03:11:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:33.333 [2024-11-18 03:11:36.548425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:33.333 [2024-11-18 03:11:36.549436] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.333 [2024-11-18 03:11:36.549466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.333 [2024-11-18 03:11:36.549476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.333 [2024-11-18 03:11:36.549487] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.333 [2024-11-18 03:11:36.549494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.333 [2024-11-18 03:11:36.549502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.333 [2024-11-18 03:11:36.549508] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.333 [2024-11-18 03:11:36.549516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.333 [2024-11-18 03:11:36.549522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.333 [2024-11-18 03:11:36.549530] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.333 [2024-11-18 03:11:36.549536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.333 [2024-11-18 03:11:36.549545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.333 03:11:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.333 03:11:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:33.333 03:11:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:33.594 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:33.594 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:33.594 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:33.594 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:33.594 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:33.594 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:33.594 03:11:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.594 03:11:37 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:33.594 03:11:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.594 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:33.594 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:33.854 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:33.854 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:33.854 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:33.854 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:33.854 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:33.854 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:33.854 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:33.854 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:33.854 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:33.854 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:33.854 03:11:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:46.106 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:46.106 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:46.106 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.107 03:11:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.107 03:11:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.107 03:11:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.107 03:11:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.107 03:11:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.107 03:11:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.107 [2024-11-18 03:11:49.448619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:46.107 [2024-11-18 03:11:49.449623] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.107 [2024-11-18 03:11:49.449651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.107 [2024-11-18 03:11:49.449663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.107 [2024-11-18 03:11:49.449675] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.107 [2024-11-18 03:11:49.449684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.107 [2024-11-18 03:11:49.449691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.107 [2024-11-18 03:11:49.449699] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.107 [2024-11-18 03:11:49.449706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.107 [2024-11-18 03:11:49.449714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.107 [2024-11-18 03:11:49.449720] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.107 [2024-11-18 03:11:49.449727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.107 [2024-11-18 03:11:49.449735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:46.107 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:46.680 [2024-11-18 03:11:49.948624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:46.680 [2024-11-18 03:11:49.949372] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.680 [2024-11-18 03:11:49.949405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.680 [2024-11-18 03:11:49.949414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.680 [2024-11-18 03:11:49.949426] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.680 [2024-11-18 03:11:49.949433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.680 [2024-11-18 03:11:49.949443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.680 [2024-11-18 03:11:49.949449] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.680 [2024-11-18 03:11:49.949457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.680 [2024-11-18 03:11:49.949463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.680 [2024-11-18 03:11:49.949471] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.680 [2024-11-18 03:11:49.949477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.680 [2024-11-18 03:11:49.949485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.680 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:46.680 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:46.680 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:46.680 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.680 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.680 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.680 03:11:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.680 03:11:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.680 03:11:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.680 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:46.680 03:11:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:46.680 03:11:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:46.680 03:11:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:46.680 03:11:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:46.680 03:11:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:46.680 03:11:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:46.680 03:11:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:46.680 03:11:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:46.680 03:11:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:46.680 03:11:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:46.680 03:11:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:46.680 03:11:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.923 03:12:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.923 03:12:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.923 03:12:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.923 03:12:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.923 03:12:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.923 03:12:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:58.923 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:58.923 [2024-11-18 03:12:02.349050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:58.923 [2024-11-18 03:12:02.350143] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.923 [2024-11-18 03:12:02.350240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.923 [2024-11-18 03:12:02.350301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.923 [2024-11-18 03:12:02.350390] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.923 [2024-11-18 03:12:02.350414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.923 [2024-11-18 03:12:02.350475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.923 [2024-11-18 03:12:02.350509] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.923 [2024-11-18 03:12:02.350525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.923 [2024-11-18 03:12:02.350580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.923 [2024-11-18 03:12:02.350622] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.923 [2024-11-18 03:12:02.350639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.923 [2024-11-18 03:12:02.350662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.184 [2024-11-18 03:12:02.749054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:59.184 [2024-11-18 03:12:02.750206] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.184 [2024-11-18 03:12:02.750302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.184 [2024-11-18 03:12:02.750378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.184 [2024-11-18 03:12:02.750430] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.184 [2024-11-18 03:12:02.750449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.184 [2024-11-18 03:12:02.750477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.184 [2024-11-18 03:12:02.750529] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.184 [2024-11-18 03:12:02.750551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.184 [2024-11-18 03:12:02.750645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.184 [2024-11-18 03:12:02.750669] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.184 [2024-11-18 03:12:02.750686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.184 [2024-11-18 03:12:02.750710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.478 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:59.478 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:59.478 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:59.478 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:59.478 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:59.478 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:59.478 03:12:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.478 03:12:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:59.478 03:12:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.478 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:59.478 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:59.478 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:59.478 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:59.478 03:12:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:59.478 03:12:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:59.775 03:12:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:59.776 03:12:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:59.776 03:12:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:59.776 03:12:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:59.776 03:12:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:59.776 03:12:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:59.776 03:12:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:12.088 03:12:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:12.088 03:12:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:12.088 03:12:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:12.088 03:12:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:12.088 03:12:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:12.088 03:12:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:12.088 03:12:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.088 03:12:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.088 03:12:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.088 03:12:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:12.088 03:12:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:12.088 03:12:15 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.19 00:12:12.088 03:12:15 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.19 00:12:12.088 03:12:15 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:12.088 03:12:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:12:12.089 03:12:15 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:12:12.089 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 03:12:15 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:12.089 03:12:15 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79407 00:12:12.089 03:12:15 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 79407 ']' 00:12:12.089 03:12:15 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 79407 00:12:12.089 03:12:15 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:12.089 03:12:15 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:12.089 03:12:15 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79407 00:12:12.089 03:12:15 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:12.089 03:12:15 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:12.089 killing process with pid 79407 00:12:12.089 03:12:15 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79407' 00:12:12.089 03:12:15 sw_hotplug -- common/autotest_common.sh@969 -- # kill 79407 00:12:12.089 03:12:15 sw_hotplug -- common/autotest_common.sh@974 -- # wait 79407 00:12:12.089 03:12:15 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:12.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:12.611 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:12.611 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:12.872 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.872 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.872 00:12:12.872 real 2m28.661s 00:12:12.872 user 1m49.370s 00:12:12.872 sys 0m18.005s 00:12:12.872 03:12:16 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.872 ************************************ 00:12:12.872 END TEST sw_hotplug 00:12:12.872 ************************************ 00:12:12.872 03:12:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.872 03:12:16 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:12.872 03:12:16 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:12.872 03:12:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:12.872 03:12:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.872 03:12:16 -- common/autotest_common.sh@10 -- # set +x 00:12:12.872 ************************************ 00:12:12.872 START TEST nvme_xnvme 00:12:12.872 ************************************ 00:12:12.872 03:12:16 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:12.872 * Looking for test storage... 00:12:12.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:12.872 03:12:16 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:12.872 03:12:16 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:12.872 03:12:16 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:13.134 03:12:16 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:13.134 03:12:16 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.134 03:12:16 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:13.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.134 --rc genhtml_branch_coverage=1 00:12:13.134 --rc genhtml_function_coverage=1 00:12:13.134 --rc genhtml_legend=1 00:12:13.134 --rc geninfo_all_blocks=1 00:12:13.134 --rc geninfo_unexecuted_blocks=1 00:12:13.134 00:12:13.134 ' 00:12:13.134 03:12:16 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:13.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.134 --rc genhtml_branch_coverage=1 00:12:13.134 --rc genhtml_function_coverage=1 00:12:13.134 --rc genhtml_legend=1 00:12:13.134 --rc geninfo_all_blocks=1 00:12:13.134 --rc geninfo_unexecuted_blocks=1 00:12:13.134 00:12:13.134 ' 00:12:13.134 03:12:16 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:13.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.134 --rc genhtml_branch_coverage=1 00:12:13.134 --rc genhtml_function_coverage=1 00:12:13.134 --rc genhtml_legend=1 00:12:13.134 --rc geninfo_all_blocks=1 00:12:13.134 --rc geninfo_unexecuted_blocks=1 00:12:13.134 00:12:13.134 ' 00:12:13.134 03:12:16 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:13.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.134 --rc genhtml_branch_coverage=1 00:12:13.134 --rc genhtml_function_coverage=1 00:12:13.134 --rc genhtml_legend=1 00:12:13.134 --rc geninfo_all_blocks=1 00:12:13.134 --rc geninfo_unexecuted_blocks=1 00:12:13.134 00:12:13.134 ' 00:12:13.134 03:12:16 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.134 03:12:16 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.134 03:12:16 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.134 03:12:16 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.134 03:12:16 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.134 03:12:16 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:13.134 03:12:16 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.134 03:12:16 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:13.134 03:12:16 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:13.134 03:12:16 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.134 03:12:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:13.134 ************************************ 00:12:13.134 START TEST xnvme_to_malloc_dd_copy 00:12:13.134 ************************************ 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:13.134 03:12:16 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:13.134 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:13.135 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:13.135 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:13.135 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:13.135 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:13.135 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:13.135 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:13.135 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:13.135 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:13.135 03:12:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:13.135 { 00:12:13.135 "subsystems": [ 00:12:13.135 { 00:12:13.135 "subsystem": "bdev", 00:12:13.135 "config": [ 00:12:13.135 { 00:12:13.135 "params": { 00:12:13.135 "block_size": 512, 00:12:13.135 "num_blocks": 2097152, 00:12:13.135 "name": "malloc0" 00:12:13.135 }, 00:12:13.135 "method": "bdev_malloc_create" 00:12:13.135 }, 00:12:13.135 { 00:12:13.135 "params": { 00:12:13.135 "io_mechanism": "libaio", 00:12:13.135 "filename": "/dev/nullb0", 00:12:13.135 "name": "null0" 00:12:13.135 }, 00:12:13.135 "method": "bdev_xnvme_create" 00:12:13.135 }, 00:12:13.135 { 00:12:13.135 "method": "bdev_wait_for_examine" 00:12:13.135 } 00:12:13.135 ] 00:12:13.135 } 00:12:13.135 ] 00:12:13.135 } 00:12:13.135 [2024-11-18 03:12:16.629744] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
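Annotation: the JSON block above is what gen_conf emits; the harness hands it to spdk_dd over a file descriptor (--json /dev/fd/62). The same run can be reproduced standalone by writing the config to a file first; a sketch assuming the repo layout from this log and that /dev/nullb0 exists (modprobe null_blk gb=1, as traced at init_null_blk):

cat > xnvme_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# malloc0 -> null0 exercises the write path; the harness then swaps the flags
# (--ib=null0 --ob=malloc0) to exercise the read path, as traced next.
./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json ./xnvme_copy.json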
00:12:13.135 [2024-11-18 03:12:16.629922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80784 ] 00:12:13.396 [2024-11-18 03:12:16.779992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.396 [2024-11-18 03:12:16.822213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.780  [2024-11-18T03:12:19.302Z] Copying: 305/1024 [MB] (305 MBps) [2024-11-18T03:12:20.244Z] Copying: 613/1024 [MB] (308 MBps) [2024-11-18T03:12:20.505Z] Copying: 919/1024 [MB] (305 MBps) [2024-11-18T03:12:20.766Z] Copying: 1024/1024 [MB] (average 306 MBps) 00:12:17.189 00:12:17.189 03:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:17.189 03:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:17.189 03:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:17.189 03:12:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:17.451 { 00:12:17.451 "subsystems": [ 00:12:17.451 { 00:12:17.451 "subsystem": "bdev", 00:12:17.451 "config": [ 00:12:17.451 { 00:12:17.451 "params": { 00:12:17.451 "block_size": 512, 00:12:17.451 "num_blocks": 2097152, 00:12:17.451 "name": "malloc0" 00:12:17.451 }, 00:12:17.451 "method": "bdev_malloc_create" 00:12:17.451 }, 00:12:17.451 { 00:12:17.451 "params": { 00:12:17.451 "io_mechanism": "libaio", 00:12:17.451 "filename": "/dev/nullb0", 00:12:17.451 "name": "null0" 00:12:17.451 }, 00:12:17.451 "method": "bdev_xnvme_create" 00:12:17.451 }, 00:12:17.451 { 00:12:17.451 "method": "bdev_wait_for_examine" 00:12:17.451 } 00:12:17.451 ] 00:12:17.451 } 00:12:17.451 ] 00:12:17.451 } 00:12:17.451 [2024-11-18 03:12:20.797722] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:17.451 [2024-11-18 03:12:20.797866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80839 ] 00:12:17.451 [2024-11-18 03:12:20.944427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.451 [2024-11-18 03:12:20.986903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.836  [2024-11-18T03:12:23.357Z] Copying: 310/1024 [MB] (310 MBps) [2024-11-18T03:12:24.300Z] Copying: 621/1024 [MB] (311 MBps) [2024-11-18T03:12:24.561Z] Copying: 933/1024 [MB] (311 MBps) [2024-11-18T03:12:25.133Z] Copying: 1024/1024 [MB] (average 311 MBps) 00:12:21.556 00:12:21.556 03:12:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:21.556 03:12:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:21.556 03:12:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:21.556 03:12:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:21.556 03:12:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:21.556 03:12:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:21.556 { 00:12:21.556 "subsystems": [ 00:12:21.556 { 00:12:21.556 "subsystem": "bdev", 00:12:21.556 "config": [ 00:12:21.556 { 00:12:21.556 "params": { 00:12:21.556 "block_size": 512, 00:12:21.556 "num_blocks": 2097152, 00:12:21.556 "name": "malloc0" 00:12:21.556 }, 00:12:21.556 "method": "bdev_malloc_create" 00:12:21.556 }, 00:12:21.556 { 00:12:21.556 "params": { 00:12:21.556 "io_mechanism": "io_uring", 00:12:21.556 "filename": "/dev/nullb0", 00:12:21.556 "name": "null0" 00:12:21.556 }, 00:12:21.556 "method": "bdev_xnvme_create" 00:12:21.556 }, 00:12:21.556 { 00:12:21.556 "method": "bdev_wait_for_examine" 00:12:21.556 } 00:12:21.556 ] 00:12:21.556 } 00:12:21.556 ] 00:12:21.556 } 00:12:21.556 [2024-11-18 03:12:24.936865] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
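Annotation: between the libaio and io_uring passes only one key in the config changes; the trace at xnvme.sh@38-39 shows the swap before gen_conf is re-run:

method_bdev_xnvme_create_0["io_mechanism"]=io_uring   # was libaio; same /dev/nullb0, same bdev name

That single switch accounts for the throughput delta visible in the copy summaries: roughly 306 and 311 MBps per pass under libaio versus 320 and 322 MBps under io_uring on this VM.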
00:12:21.556 [2024-11-18 03:12:24.936983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80888 ] 00:12:21.556 [2024-11-18 03:12:25.081830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.556 [2024-11-18 03:12:25.112192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.942  [2024-11-18T03:12:27.462Z] Copying: 319/1024 [MB] (319 MBps) [2024-11-18T03:12:28.404Z] Copying: 639/1024 [MB] (320 MBps) [2024-11-18T03:12:28.665Z] Copying: 959/1024 [MB] (320 MBps) [2024-11-18T03:12:28.927Z] Copying: 1024/1024 [MB] (average 320 MBps) 00:12:25.350 00:12:25.350 03:12:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:25.350 03:12:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:25.350 03:12:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:25.350 03:12:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:25.350 { 00:12:25.350 "subsystems": [ 00:12:25.350 { 00:12:25.350 "subsystem": "bdev", 00:12:25.350 "config": [ 00:12:25.350 { 00:12:25.350 "params": { 00:12:25.350 "block_size": 512, 00:12:25.350 "num_blocks": 2097152, 00:12:25.350 "name": "malloc0" 00:12:25.350 }, 00:12:25.350 "method": "bdev_malloc_create" 00:12:25.350 }, 00:12:25.350 { 00:12:25.350 "params": { 00:12:25.350 "io_mechanism": "io_uring", 00:12:25.350 "filename": "/dev/nullb0", 00:12:25.350 "name": "null0" 00:12:25.350 }, 00:12:25.350 "method": "bdev_xnvme_create" 00:12:25.350 }, 00:12:25.350 { 00:12:25.350 "method": "bdev_wait_for_examine" 00:12:25.350 } 00:12:25.350 ] 00:12:25.350 } 00:12:25.350 ] 00:12:25.350 } 00:12:25.350 [2024-11-18 03:12:28.905519] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:25.351 [2024-11-18 03:12:28.905637] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80943 ] 00:12:25.612 [2024-11-18 03:12:29.052466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.612 [2024-11-18 03:12:29.094746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.996  [2024-11-18T03:12:31.513Z] Copying: 321/1024 [MB] (321 MBps) [2024-11-18T03:12:32.455Z] Copying: 644/1024 [MB] (323 MBps) [2024-11-18T03:12:32.716Z] Copying: 967/1024 [MB] (322 MBps) [2024-11-18T03:12:32.977Z] Copying: 1024/1024 [MB] (average 322 MBps) 00:12:29.400 00:12:29.400 03:12:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:29.400 03:12:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:29.400 00:12:29.400 real 0m16.323s 00:12:29.400 user 0m13.472s 00:12:29.400 sys 0m2.369s 00:12:29.400 03:12:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.400 ************************************ 00:12:29.400 END TEST xnvme_to_malloc_dd_copy 00:12:29.400 ************************************ 00:12:29.400 03:12:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:29.400 03:12:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:29.400 03:12:32 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:29.400 03:12:32 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.400 03:12:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:29.400 ************************************ 00:12:29.400 START TEST xnvme_bdevperf 00:12:29.400 ************************************ 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:29.400 
03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:29.400 03:12:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:29.662 { 00:12:29.662 "subsystems": [ 00:12:29.662 { 00:12:29.662 "subsystem": "bdev", 00:12:29.662 "config": [ 00:12:29.662 { 00:12:29.662 "params": { 00:12:29.662 "io_mechanism": "libaio", 00:12:29.662 "filename": "/dev/nullb0", 00:12:29.662 "name": "null0" 00:12:29.662 }, 00:12:29.662 "method": "bdev_xnvme_create" 00:12:29.662 }, 00:12:29.662 { 00:12:29.662 "method": "bdev_wait_for_examine" 00:12:29.662 } 00:12:29.662 ] 00:12:29.662 } 00:12:29.662 ] 00:12:29.662 } 00:12:29.662 [2024-11-18 03:12:32.998532] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:29.662 [2024-11-18 03:12:32.998644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81020 ] 00:12:29.662 [2024-11-18 03:12:33.145636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.662 [2024-11-18 03:12:33.188676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.924 Running I/O for 5 seconds... 00:12:31.809 209600.00 IOPS, 818.75 MiB/s [2024-11-18T03:12:36.329Z] 209952.00 IOPS, 820.12 MiB/s [2024-11-18T03:12:37.722Z] 210026.67 IOPS, 820.42 MiB/s [2024-11-18T03:12:38.297Z] 210016.00 IOPS, 820.38 MiB/s 00:12:34.720 Latency(us) 00:12:34.720 [2024-11-18T03:12:38.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.720 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:34.720 null0 : 5.00 210015.84 820.37 0.00 0.00 302.69 107.91 1512.37 00:12:34.720 [2024-11-18T03:12:38.297Z] =================================================================================================================== 00:12:34.720 [2024-11-18T03:12:38.297Z] Total : 210015.84 820.37 0.00 0.00 302.69 107.91 1512.37 00:12:34.981 03:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:34.981 03:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:34.981 03:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:34.981 03:12:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:34.981 03:12:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:34.981 03:12:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:34.981 { 00:12:34.981 "subsystems": [ 00:12:34.981 { 00:12:34.981 "subsystem": "bdev", 00:12:34.981 "config": [ 00:12:34.981 { 00:12:34.981 "params": { 00:12:34.981 "io_mechanism": "io_uring", 00:12:34.981 "filename": "/dev/nullb0", 00:12:34.981 "name": "null0" 00:12:34.981 }, 00:12:34.981 "method": "bdev_xnvme_create" 00:12:34.981 }, 00:12:34.981 { 00:12:34.981 "method": 
"bdev_wait_for_examine" 00:12:34.981 } 00:12:34.981 ] 00:12:34.981 } 00:12:34.981 ] 00:12:34.981 } 00:12:34.981 [2024-11-18 03:12:38.490994] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:34.981 [2024-11-18 03:12:38.491114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81084 ] 00:12:35.243 [2024-11-18 03:12:38.637995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.243 [2024-11-18 03:12:38.667781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.243 Running I/O for 5 seconds... 00:12:37.576 238720.00 IOPS, 932.50 MiB/s [2024-11-18T03:12:42.096Z] 238336.00 IOPS, 931.00 MiB/s [2024-11-18T03:12:43.039Z] 238421.33 IOPS, 931.33 MiB/s [2024-11-18T03:12:43.980Z] 238464.00 IOPS, 931.50 MiB/s 00:12:40.403 Latency(us) 00:12:40.403 [2024-11-18T03:12:43.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.403 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:40.403 null0 : 5.00 238443.49 931.42 0.00 0.00 266.15 145.72 1474.56 00:12:40.403 [2024-11-18T03:12:43.980Z] =================================================================================================================== 00:12:40.403 [2024-11-18T03:12:43.980Z] Total : 238443.49 931.42 0.00 0.00 266.15 145.72 1474.56 00:12:40.403 03:12:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:40.403 03:12:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:40.403 00:12:40.403 real 0m10.996s 00:12:40.403 user 0m8.603s 00:12:40.403 sys 0m2.164s 00:12:40.403 03:12:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.403 03:12:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:40.403 ************************************ 00:12:40.403 END TEST xnvme_bdevperf 00:12:40.403 ************************************ 00:12:40.403 00:12:40.403 real 0m27.582s 00:12:40.403 user 0m22.191s 00:12:40.403 sys 0m4.644s 00:12:40.404 03:12:43 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.404 ************************************ 00:12:40.404 03:12:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:40.404 END TEST nvme_xnvme 00:12:40.404 ************************************ 00:12:40.665 03:12:44 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:40.665 03:12:44 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:40.665 03:12:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.665 03:12:44 -- common/autotest_common.sh@10 -- # set +x 00:12:40.665 ************************************ 00:12:40.665 START TEST blockdev_xnvme 00:12:40.665 ************************************ 00:12:40.665 03:12:44 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:40.665 * Looking for test storage... 
00:12:40.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:40.665 03:12:44 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:40.665 03:12:44 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:40.665 03:12:44 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:40.665 03:12:44 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.665 03:12:44 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:40.665 03:12:44 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.665 03:12:44 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:40.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.665 --rc genhtml_branch_coverage=1 00:12:40.665 --rc genhtml_function_coverage=1 00:12:40.665 --rc genhtml_legend=1 00:12:40.665 --rc geninfo_all_blocks=1 00:12:40.665 --rc geninfo_unexecuted_blocks=1 00:12:40.665 00:12:40.665 ' 00:12:40.665 03:12:44 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:40.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.666 --rc genhtml_branch_coverage=1 00:12:40.666 --rc genhtml_function_coverage=1 00:12:40.666 --rc genhtml_legend=1 
00:12:40.666 --rc geninfo_all_blocks=1 00:12:40.666 --rc geninfo_unexecuted_blocks=1 00:12:40.666 00:12:40.666 ' 00:12:40.666 03:12:44 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:40.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.666 --rc genhtml_branch_coverage=1 00:12:40.666 --rc genhtml_function_coverage=1 00:12:40.666 --rc genhtml_legend=1 00:12:40.666 --rc geninfo_all_blocks=1 00:12:40.666 --rc geninfo_unexecuted_blocks=1 00:12:40.666 00:12:40.666 ' 00:12:40.666 03:12:44 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:40.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.666 --rc genhtml_branch_coverage=1 00:12:40.666 --rc genhtml_function_coverage=1 00:12:40.666 --rc genhtml_legend=1 00:12:40.666 --rc geninfo_all_blocks=1 00:12:40.666 --rc geninfo_unexecuted_blocks=1 00:12:40.666 00:12:40.666 ' 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81221 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 81221 00:12:40.666 03:12:44 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 81221 ']' 00:12:40.666 03:12:44 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:40.666 03:12:44 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:40.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.666 03:12:44 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:40.666 03:12:44 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.666 03:12:44 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:40.666 03:12:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:40.927 [2024-11-18 03:12:44.241243] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:40.927 [2024-11-18 03:12:44.241354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81221 ] 00:12:40.927 [2024-11-18 03:12:44.380991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.927 [2024-11-18 03:12:44.412527] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.500 03:12:45 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:41.500 03:12:45 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:41.500 03:12:45 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:41.500 03:12:45 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:41.500 03:12:45 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:41.500 03:12:45 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:41.500 03:12:45 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:41.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:42.022 Waiting for block devices as requested 00:12:42.022 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:42.022 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:42.283 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:42.283 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:47.573 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # 
is_block_zoned nvme1n1 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:47.573 03:12:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:47.573 03:12:50 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:47.573 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:47.574 nvme0n1 00:12:47.574 nvme1n1 00:12:47.574 nvme2n1 00:12:47.574 nvme2n2 00:12:47.574 nvme2n3 00:12:47.574 nvme3n1 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.574 03:12:50 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3d5af972-8ea5-426b-92db-80a38d8f366a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3d5af972-8ea5-426b-92db-80a38d8f366a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b4f211df-307b-43a8-ab3c-40d4786037ad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b4f211df-307b-43a8-ab3c-40d4786037ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a03a6771-a343-460c-834a-61f91147edd1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a03a6771-a343-460c-834a-61f91147edd1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c5efb6b9-b866-4f62-9eff-dd79efba8aa3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c5efb6b9-b866-4f62-9eff-dd79efba8aa3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "fe5987fe-ef1a-4456-9e06-e3f5338b1305"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fe5987fe-ef1a-4456-9e06-e3f5338b1305",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a4db3df3-9d3b-4b84-8360-f9f0482c94df"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a4db3df3-9d3b-4b84-8360-f9f0482c94df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:47.574 03:12:50 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 81221 
00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 81221 ']' 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 81221 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81221 00:12:47.574 killing process with pid 81221 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81221' 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 81221 00:12:47.574 03:12:50 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 81221 00:12:47.833 03:12:51 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:47.833 03:12:51 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:47.833 03:12:51 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:47.833 03:12:51 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.833 03:12:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.833 ************************************ 00:12:47.833 START TEST bdev_hello_world 00:12:47.833 ************************************ 00:12:47.833 03:12:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:47.833 [2024-11-18 03:12:51.281431] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:47.833 [2024-11-18 03:12:51.281664] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81568 ] 00:12:48.092 [2024-11-18 03:12:51.429848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.092 [2024-11-18 03:12:51.463113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.092 [2024-11-18 03:12:51.627899] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:48.092 [2024-11-18 03:12:51.628096] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:48.092 [2024-11-18 03:12:51.628139] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:48.092 [2024-11-18 03:12:51.630494] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:48.092 [2024-11-18 03:12:51.630939] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:48.092 [2024-11-18 03:12:51.631028] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:48.092 [2024-11-18 03:12:51.631306] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
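The hello-world exercise just traced reduces to one example binary run against the first enumerated bdev: hello_bdev brings up its own SPDK app from the shared JSON config, writes "Hello World!" to nvme0n1, reads it back, and reports the string seen above. To reproduce it by hand (paths as in the log's vagrant layout; substitute your own repo root):

    #!/usr/bin/env bash
    # Re-run the bdev hello-world smoke test outside the harness.
    set -euo pipefail

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

    # Same invocation as blockdev.sh@759 above, minus the run_test wrappers.
    "$SPDK_DIR/build/examples/hello_bdev" \
        --json "$SPDK_DIR/test/bdev/bdev.json" \
        -b nvme0n1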
00:12:48.092 00:12:48.092 [2024-11-18 03:12:51.631490] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:48.350 00:12:48.350 real 0m0.560s 00:12:48.350 user 0m0.289s 00:12:48.350 sys 0m0.155s 00:12:48.350 03:12:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:48.350 ************************************ 00:12:48.350 END TEST bdev_hello_world 00:12:48.350 ************************************ 00:12:48.350 03:12:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:48.350 03:12:51 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:48.350 03:12:51 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:48.350 03:12:51 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:48.350 03:12:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.350 ************************************ 00:12:48.350 START TEST bdev_bounds 00:12:48.350 ************************************ 00:12:48.350 Process bdevio pid: 81593 00:12:48.350 03:12:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:48.350 03:12:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81593 00:12:48.350 03:12:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:48.350 03:12:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81593' 00:12:48.350 03:12:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81593 00:12:48.350 03:12:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:48.350 03:12:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81593 ']' 00:12:48.350 03:12:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.350 03:12:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:48.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.350 03:12:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.350 03:12:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:48.350 03:12:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:48.350 [2024-11-18 03:12:51.909414] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:48.351 [2024-11-18 03:12:51.909651] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81593 ] 00:12:48.609 [2024-11-18 03:12:52.056090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:48.609 [2024-11-18 03:12:52.091660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.609 [2024-11-18 03:12:52.091888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.609 [2024-11-18 03:12:52.091973] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.176 03:12:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:49.176 03:12:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:12:49.176 03:12:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:49.468 I/O targets: 00:12:49.468 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:49.468 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:49.468 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:49.468 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:49.468 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:49.468 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:49.468 00:12:49.468 00:12:49.468 CUnit - A unit testing framework for C - Version 2.1-3 00:12:49.468 http://cunit.sourceforge.net/ 00:12:49.468 00:12:49.468 00:12:49.468 Suite: bdevio tests on: nvme3n1 00:12:49.468 Test: blockdev write read block ...passed 00:12:49.468 Test: blockdev write zeroes read block ...passed 00:12:49.468 Test: blockdev write zeroes read no split ...passed 00:12:49.468 Test: blockdev write zeroes read split ...passed 00:12:49.468 Test: blockdev write zeroes read split partial ...passed 00:12:49.468 Test: blockdev reset ...passed 00:12:49.468 Test: blockdev write read 8 blocks ...passed 00:12:49.468 Test: blockdev write read size > 128k ...passed 00:12:49.468 Test: blockdev write read invalid size ...passed 00:12:49.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:49.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:49.468 Test: blockdev write read max offset ...passed 00:12:49.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:49.468 Test: blockdev writev readv 8 blocks ...passed 00:12:49.468 Test: blockdev writev readv 30 x 1block ...passed 00:12:49.468 Test: blockdev writev readv block ...passed 00:12:49.468 Test: blockdev writev readv size > 128k ...passed 00:12:49.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:49.468 Test: blockdev comparev and writev ...passed 00:12:49.468 Test: blockdev nvme passthru rw ...passed 00:12:49.468 Test: blockdev nvme passthru vendor specific ...passed 00:12:49.468 Test: blockdev nvme admin passthru ...passed 00:12:49.468 Test: blockdev copy ...passed 00:12:49.468 Suite: bdevio tests on: nvme2n3 00:12:49.468 Test: blockdev write read block ...passed 00:12:49.468 Test: blockdev write zeroes read block ...passed 00:12:49.468 Test: blockdev write zeroes read no split ...passed 00:12:49.468 Test: blockdev write zeroes read split ...passed 00:12:49.468 Test: blockdev write zeroes read split partial ...passed 00:12:49.468 Test: blockdev reset ...passed 
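Each "Suite: bdevio tests on: <bdev>" block in this stretch is CUnit output from the bdevio app, which bdev_bounds starts in wait mode and then drives over RPC. The two-step launch, condensed from blockdev.sh@288-293 in the trace (the backgrounding, sleep, and cleanup glue here are illustrative; the harness uses waitforlisten instead of a fixed delay):

    #!/usr/bin/env bash
    set -euo pipefail

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

    # Flags as passed by the harness; -w makes bdevio wait for an RPC before testing.
    "$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK_DIR/test/bdev/bdev.json" &
    bdevio_pid=$!
    sleep 1   # stand-in for waitforlisten, to keep the sketch short

    # Kick off every registered suite against every bdev in the config.
    "$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests

    kill "$bdevio_pid"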
00:12:49.468 Test: blockdev write read 8 blocks ...passed 00:12:49.468 Test: blockdev write read size > 128k ...passed 00:12:49.468 Test: blockdev write read invalid size ...passed 00:12:49.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:49.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:49.468 Test: blockdev write read max offset ...passed 00:12:49.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:49.468 Test: blockdev writev readv 8 blocks ...passed 00:12:49.468 Test: blockdev writev readv 30 x 1block ...passed 00:12:49.468 Test: blockdev writev readv block ...passed 00:12:49.468 Test: blockdev writev readv size > 128k ...passed 00:12:49.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:49.468 Test: blockdev comparev and writev ...passed 00:12:49.468 Test: blockdev nvme passthru rw ...passed 00:12:49.468 Test: blockdev nvme passthru vendor specific ...passed 00:12:49.468 Test: blockdev nvme admin passthru ...passed 00:12:49.468 Test: blockdev copy ...passed 00:12:49.468 Suite: bdevio tests on: nvme2n2 00:12:49.468 Test: blockdev write read block ...passed 00:12:49.468 Test: blockdev write zeroes read block ...passed 00:12:49.468 Test: blockdev write zeroes read no split ...passed 00:12:49.468 Test: blockdev write zeroes read split ...passed 00:12:49.468 Test: blockdev write zeroes read split partial ...passed 00:12:49.468 Test: blockdev reset ...passed 00:12:49.468 Test: blockdev write read 8 blocks ...passed 00:12:49.468 Test: blockdev write read size > 128k ...passed 00:12:49.468 Test: blockdev write read invalid size ...passed 00:12:49.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:49.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:49.468 Test: blockdev write read max offset ...passed 00:12:49.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:49.468 Test: blockdev writev readv 8 blocks ...passed 00:12:49.468 Test: blockdev writev readv 30 x 1block ...passed 00:12:49.468 Test: blockdev writev readv block ...passed 00:12:49.468 Test: blockdev writev readv size > 128k ...passed 00:12:49.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:49.468 Test: blockdev comparev and writev ...passed 00:12:49.468 Test: blockdev nvme passthru rw ...passed 00:12:49.468 Test: blockdev nvme passthru vendor specific ...passed 00:12:49.468 Test: blockdev nvme admin passthru ...passed 00:12:49.468 Test: blockdev copy ...passed 00:12:49.468 Suite: bdevio tests on: nvme2n1 00:12:49.468 Test: blockdev write read block ...passed 00:12:49.468 Test: blockdev write zeroes read block ...passed 00:12:49.468 Test: blockdev write zeroes read no split ...passed 00:12:49.468 Test: blockdev write zeroes read split ...passed 00:12:49.468 Test: blockdev write zeroes read split partial ...passed 00:12:49.468 Test: blockdev reset ...passed 00:12:49.468 Test: blockdev write read 8 blocks ...passed 00:12:49.468 Test: blockdev write read size > 128k ...passed 00:12:49.468 Test: blockdev write read invalid size ...passed 00:12:49.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:49.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:49.469 Test: blockdev write read max offset ...passed 00:12:49.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:49.469 Test: blockdev writev readv 8 blocks 
...passed 00:12:49.469 Test: blockdev writev readv 30 x 1block ...passed 00:12:49.469 Test: blockdev writev readv block ...passed 00:12:49.469 Test: blockdev writev readv size > 128k ...passed 00:12:49.469 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:49.469 Test: blockdev comparev and writev ...passed 00:12:49.469 Test: blockdev nvme passthru rw ...passed 00:12:49.469 Test: blockdev nvme passthru vendor specific ...passed 00:12:49.469 Test: blockdev nvme admin passthru ...passed 00:12:49.469 Test: blockdev copy ...passed 00:12:49.469 Suite: bdevio tests on: nvme1n1 00:12:49.469 Test: blockdev write read block ...passed 00:12:49.469 Test: blockdev write zeroes read block ...passed 00:12:49.469 Test: blockdev write zeroes read no split ...passed 00:12:49.469 Test: blockdev write zeroes read split ...passed 00:12:49.469 Test: blockdev write zeroes read split partial ...passed 00:12:49.469 Test: blockdev reset ...passed 00:12:49.469 Test: blockdev write read 8 blocks ...passed 00:12:49.469 Test: blockdev write read size > 128k ...passed 00:12:49.469 Test: blockdev write read invalid size ...passed 00:12:49.469 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:49.469 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:49.469 Test: blockdev write read max offset ...passed 00:12:49.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:49.469 Test: blockdev writev readv 8 blocks ...passed 00:12:49.469 Test: blockdev writev readv 30 x 1block ...passed 00:12:49.757 Test: blockdev writev readv block ...passed 00:12:49.758 Test: blockdev writev readv size > 128k ...passed 00:12:49.758 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:49.758 Test: blockdev comparev and writev ...passed 00:12:49.758 Test: blockdev nvme passthru rw ...passed 00:12:49.758 Test: blockdev nvme passthru vendor specific ...passed 00:12:49.758 Test: blockdev nvme admin passthru ...passed 00:12:49.758 Test: blockdev copy ...passed 00:12:49.758 Suite: bdevio tests on: nvme0n1 00:12:49.758 Test: blockdev write read block ...passed 00:12:49.758 Test: blockdev write zeroes read block ...passed 00:12:49.758 Test: blockdev write zeroes read no split ...passed 00:12:49.758 Test: blockdev write zeroes read split ...passed 00:12:49.758 Test: blockdev write zeroes read split partial ...passed 00:12:49.758 Test: blockdev reset ...passed 00:12:49.758 Test: blockdev write read 8 blocks ...passed 00:12:49.758 Test: blockdev write read size > 128k ...passed 00:12:49.758 Test: blockdev write read invalid size ...passed 00:12:49.758 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:49.758 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:49.758 Test: blockdev write read max offset ...passed 00:12:49.758 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:49.758 Test: blockdev writev readv 8 blocks ...passed 00:12:49.758 Test: blockdev writev readv 30 x 1block ...passed 00:12:49.758 Test: blockdev writev readv block ...passed 00:12:49.758 Test: blockdev writev readv size > 128k ...passed 00:12:49.758 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:49.758 Test: blockdev comparev and writev ...passed 00:12:49.758 Test: blockdev nvme passthru rw ...passed 00:12:49.758 Test: blockdev nvme passthru vendor specific ...passed 00:12:49.758 Test: blockdev nvme admin passthru ...passed 00:12:49.758 Test: blockdev copy ...passed 
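With all six suites green, the run summary below closes the CUnit report and bdev_bounds tears the app down through the harness's killprocess helper, the same sequence already traced for pid 81221. Trimmed to its core (the trace's uname and reactor-vs-sudo checks are omitted):

    # Validate a pid, log, SIGTERM it, and reap it -- autotest_common.sh@950-974 in spirit.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                # the '[' -z ... ']' guard
        kill -0 "$pid" 2>/dev/null || return 1   # process must still exist
        echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
        kill "$pid"
        wait "$pid" || true                      # works because the app is our child
    }

    # Demo against a throwaway child process:
    sleep 300 & killprocess $!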
00:12:49.758 00:12:49.758 Run Summary: Type Total Ran Passed Failed Inactive 00:12:49.758 suites 6 6 n/a 0 0 00:12:49.758 tests 138 138 138 0 0 00:12:49.758 asserts 780 780 780 0 n/a 00:12:49.758 00:12:49.758 Elapsed time = 0.470 seconds 00:12:49.758 0 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81593 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81593 ']' 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81593 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81593 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81593' 00:12:49.758 killing process with pid 81593 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 81593 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81593 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:49.758 00:12:49.758 real 0m1.420s 00:12:49.758 user 0m3.509s 00:12:49.758 sys 0m0.287s 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.758 ************************************ 00:12:49.758 END TEST bdev_bounds 00:12:49.758 03:12:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:49.758 ************************************ 00:12:49.758 03:12:53 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:49.758 03:12:53 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:49.758 03:12:53 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.758 03:12:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.019 ************************************ 00:12:50.019 START TEST bdev_nbd 00:12:50.019 ************************************ 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
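bdev_nbd, now being set up, exports each bdev as a /dev/nbdN device and proves it readable with a one-block direct-I/O read; that probe repeats below for nbd0 through nbd5 and again for the nbd0/nbd1/nbd10-nbd13 set in the second pass. Condensed into one function (the sleep interval and scratch path are illustrative; the harness writes test/bdev/nbdtest and retries the dd as well):

    #!/usr/bin/env bash
    # The waitfornbd + dd probe traced at autotest_common.sh@868-889, condensed.
    set -euo pipefail

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    SCRATCH=${SCRATCH:-/tmp/nbdtest}

    waitfornbd() {
        local nbd_name=$1 i
        # Poll until the kernel publishes the device (up to 20 tries, as in the trace).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O block off the device, then require a non-empty result.
        dd if="/dev/$nbd_name" of="$SCRATCH" bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s "$SCRATCH")
        rm -f "$SCRATCH"
        [ "$size" != 0 ]
    }

    $RPC nbd_start_disk nvme0n1 /dev/nbd0   # as nbd_common.sh@15 does later in the trace
    waitfornbd nbd0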
00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81645 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81645 /var/tmp/spdk-nbd.sock 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81645 ']' 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:50.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.019 03:12:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:50.019 [2024-11-18 03:12:53.402086] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:50.019 [2024-11-18 03:12:53.402233] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.019 [2024-11-18 03:12:53.554729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.280 [2024-11-18 03:12:53.606954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:50.852 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.113 
1+0 records in 00:12:51.113 1+0 records out 00:12:51.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000832409 s, 4.9 MB/s 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:51.113 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.375 1+0 records in 00:12:51.375 1+0 records out 00:12:51.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128999 s, 3.2 MB/s 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:51.375 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:51.637 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:51.637 03:12:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:51.637 03:12:55 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.637 1+0 records in 00:12:51.637 1+0 records out 00:12:51.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695538 s, 5.9 MB/s 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:51.637 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.903 1+0 records in 00:12:51.903 1+0 records out 00:12:51.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115292 s, 3.6 MB/s 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:51.903 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.167 1+0 records in 00:12:52.167 1+0 records out 00:12:52.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715013 s, 5.7 MB/s 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:52.167 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:52.429 03:12:55 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.429 1+0 records in 00:12:52.429 1+0 records out 00:12:52.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106833 s, 3.8 MB/s 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:52.429 { 00:12:52.429 "nbd_device": "/dev/nbd0", 00:12:52.429 "bdev_name": "nvme0n1" 00:12:52.429 }, 00:12:52.429 { 00:12:52.429 "nbd_device": "/dev/nbd1", 00:12:52.429 "bdev_name": "nvme1n1" 00:12:52.429 }, 00:12:52.429 { 00:12:52.429 "nbd_device": "/dev/nbd2", 00:12:52.429 "bdev_name": "nvme2n1" 00:12:52.429 }, 00:12:52.429 { 00:12:52.429 "nbd_device": "/dev/nbd3", 00:12:52.429 "bdev_name": "nvme2n2" 00:12:52.429 }, 00:12:52.429 { 00:12:52.429 "nbd_device": "/dev/nbd4", 00:12:52.429 "bdev_name": "nvme2n3" 00:12:52.429 }, 00:12:52.429 { 00:12:52.429 "nbd_device": "/dev/nbd5", 00:12:52.429 "bdev_name": "nvme3n1" 00:12:52.429 } 00:12:52.429 ]' 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:52.429 03:12:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:52.691 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:52.691 { 00:12:52.691 "nbd_device": "/dev/nbd0", 00:12:52.691 "bdev_name": "nvme0n1" 00:12:52.691 }, 00:12:52.691 { 00:12:52.691 "nbd_device": "/dev/nbd1", 00:12:52.691 "bdev_name": "nvme1n1" 00:12:52.691 }, 00:12:52.691 { 00:12:52.691 "nbd_device": "/dev/nbd2", 00:12:52.691 "bdev_name": "nvme2n1" 00:12:52.691 }, 00:12:52.691 { 00:12:52.691 "nbd_device": "/dev/nbd3", 00:12:52.691 "bdev_name": "nvme2n2" 00:12:52.691 }, 00:12:52.691 { 00:12:52.691 "nbd_device": "/dev/nbd4", 00:12:52.691 "bdev_name": "nvme2n3" 00:12:52.691 }, 00:12:52.691 { 00:12:52.691 "nbd_device": 
"/dev/nbd5", 00:12:52.691 "bdev_name": "nvme3n1" 00:12:52.691 } 00:12:52.691 ]' 00:12:52.691 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:52.691 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.691 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:52.691 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.691 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:52.692 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.692 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:52.692 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:52.692 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:52.692 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:52.692 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.692 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.692 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.692 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.692 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.692 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.692 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:52.953 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:52.953 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:52.953 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:52.953 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.953 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.953 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:52.953 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.953 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.953 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.953 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:53.214 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:53.214 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:53.214 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:53.214 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.214 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.214 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:53.214 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.214 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.214 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.214 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:53.476 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:53.476 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:53.476 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:53.476 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.476 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.476 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:53.476 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.476 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.476 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.476 03:12:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:53.737 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:53.737 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:53.737 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:53.737 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.737 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.737 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:53.737 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.737 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.737 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.737 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:53.998 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:54.259 /dev/nbd0 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.259 03:12:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.520 1+0 records in 00:12:54.520 1+0 records out 00:12:54.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000721948 s, 5.7 MB/s 00:12:54.520 03:12:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.520 03:12:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.520 03:12:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.520 03:12:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.520 03:12:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.520 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.520 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:54.520 03:12:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:54.520 /dev/nbd1 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.520 1+0 records in 00:12:54.520 1+0 records out 00:12:54.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660554 s, 6.2 MB/s 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.520 03:12:58 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:54.520 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:54.780 /dev/nbd10 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.780 1+0 records in 00:12:54.780 1+0 records out 00:12:54.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365736 s, 11.2 MB/s 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:54.780 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:55.038 /dev/nbd11 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.038 03:12:58 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.038 1+0 records in 00:12:55.038 1+0 records out 00:12:55.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450302 s, 9.1 MB/s 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:55.038 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:55.297 /dev/nbd12 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.297 1+0 records in 00:12:55.297 1+0 records out 00:12:55.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435034 s, 9.4 MB/s 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:55.297 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:55.556 /dev/nbd13 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.556 1+0 records in 00:12:55.556 1+0 records out 00:12:55.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560353 s, 7.3 MB/s 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:55.556 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.557 03:12:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:55.816 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:55.816 { 00:12:55.816 "nbd_device": "/dev/nbd0", 00:12:55.816 "bdev_name": "nvme0n1" 00:12:55.816 }, 00:12:55.816 { 00:12:55.816 "nbd_device": "/dev/nbd1", 00:12:55.816 "bdev_name": "nvme1n1" 00:12:55.816 }, 00:12:55.816 { 00:12:55.816 "nbd_device": "/dev/nbd10", 00:12:55.816 "bdev_name": "nvme2n1" 00:12:55.816 }, 00:12:55.816 { 00:12:55.816 "nbd_device": "/dev/nbd11", 00:12:55.816 "bdev_name": "nvme2n2" 00:12:55.816 }, 00:12:55.816 { 00:12:55.817 "nbd_device": "/dev/nbd12", 00:12:55.817 "bdev_name": "nvme2n3" 00:12:55.817 }, 00:12:55.817 { 00:12:55.817 "nbd_device": "/dev/nbd13", 00:12:55.817 "bdev_name": "nvme3n1" 00:12:55.817 } 00:12:55.817 ]' 00:12:55.817 03:12:59 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:55.817 { 00:12:55.817 "nbd_device": "/dev/nbd0", 00:12:55.817 "bdev_name": "nvme0n1" 00:12:55.817 }, 00:12:55.817 { 00:12:55.817 "nbd_device": "/dev/nbd1", 00:12:55.817 "bdev_name": "nvme1n1" 00:12:55.817 }, 00:12:55.817 { 00:12:55.817 "nbd_device": "/dev/nbd10", 00:12:55.817 "bdev_name": "nvme2n1" 00:12:55.817 }, 00:12:55.817 { 00:12:55.817 "nbd_device": "/dev/nbd11", 00:12:55.817 "bdev_name": "nvme2n2" 00:12:55.817 }, 00:12:55.817 { 00:12:55.817 "nbd_device": "/dev/nbd12", 00:12:55.817 "bdev_name": "nvme2n3" 00:12:55.817 }, 00:12:55.817 { 00:12:55.817 "nbd_device": "/dev/nbd13", 00:12:55.817 "bdev_name": "nvme3n1" 00:12:55.817 } 00:12:55.817 ]' 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:55.817 /dev/nbd1 00:12:55.817 /dev/nbd10 00:12:55.817 /dev/nbd11 00:12:55.817 /dev/nbd12 00:12:55.817 /dev/nbd13' 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:55.817 /dev/nbd1 00:12:55.817 /dev/nbd10 00:12:55.817 /dev/nbd11 00:12:55.817 /dev/nbd12 00:12:55.817 /dev/nbd13' 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:55.817 256+0 records in 00:12:55.817 256+0 records out 00:12:55.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0077532 s, 135 MB/s 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:55.817 256+0 records in 00:12:55.817 256+0 records out 00:12:55.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0627202 s, 16.7 MB/s 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.817 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:56.076 256+0 records in 00:12:56.076 256+0 records out 00:12:56.076 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.282876 s, 3.7 MB/s 00:12:56.076 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.076 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:56.337 256+0 records in 00:12:56.337 256+0 records out 00:12:56.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.233689 s, 4.5 MB/s 00:12:56.337 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.337 03:12:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:56.598 256+0 records in 00:12:56.598 256+0 records out 00:12:56.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187139 s, 5.6 MB/s 00:12:56.598 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.598 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:56.860 256+0 records in 00:12:56.860 256+0 records out 00:12:56.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224173 s, 4.7 MB/s 00:12:56.860 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.860 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:57.121 256+0 records in 00:12:57.121 256+0 records out 00:12:57.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244126 s, 4.3 MB/s 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.121 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:57.381 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:57.381 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:57.381 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:57.381 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.381 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.381 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:57.381 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.381 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.381 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.381 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:57.642 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:57.642 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:57.642 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:57.642 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.642 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.642 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:57.642 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.642 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.642 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.642 03:13:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:57.642 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:57.642 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:57.642 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:57.642 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.642 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.642 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:57.642 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.642 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.642 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.642 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:57.903 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:57.903 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:57.903 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:57.903 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.903 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.903 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:57.903 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.903 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.903 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.903 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:58.163 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:58.163 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:58.163 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:58.163 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.163 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.163 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:58.163 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.163 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.163 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.163 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:58.422 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:58.422 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:58.422 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:58.422 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.422 03:13:01 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.422 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:58.422 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.422 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.422 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:58.422 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.422 03:13:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:58.680 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:58.938 malloc_lvol_verify 00:12:58.938 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:58.938 4e3f9635-c995-42c3-8468-a880599e7a15 00:12:58.939 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:59.197 2c44575e-b49d-4fea-9715-076a4747ccd9 00:12:59.197 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:59.455 /dev/nbd0 00:12:59.455 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:59.455 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:59.455 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:59.455 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:59.455 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:12:59.455 mke2fs 1.47.0 (5-Feb-2023) 00:12:59.455 Discarding device blocks: 0/4096 done 00:12:59.455 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:59.455 00:12:59.455 Allocating group tables: 0/1 done 00:12:59.455 Writing inode tables: 0/1 done 00:12:59.455 Creating journal (1024 blocks): done 00:12:59.455 Writing superblocks and filesystem accounting information: 0/1 done 00:12:59.455 00:12:59.455 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:59.455 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.455 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.455 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.455 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:59.455 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.455 03:13:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81645 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81645 ']' 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81645 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81645 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.714 killing process with pid 81645 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81645' 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81645 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81645 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:59.714 00:12:59.714 real 0m9.916s 00:12:59.714 user 0m13.719s 00:12:59.714 sys 0m3.501s 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.714 03:13:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:59.714 ************************************ 
00:12:59.714 END TEST bdev_nbd 00:12:59.714 ************************************ 00:12:59.974 03:13:03 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:59.974 03:13:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:12:59.974 03:13:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:12:59.974 03:13:03 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:12:59.974 03:13:03 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:59.974 03:13:03 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.974 03:13:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.974 ************************************ 00:12:59.974 START TEST bdev_fio 00:12:59.974 ************************************ 00:12:59.974 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # 
echo serialize_overlap=1 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:59.974 ************************************ 00:12:59.974 START TEST bdev_fio_rw_verify 00:12:59.974 ************************************ 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:59.974 03:13:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:00.232 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:00.233 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:00.233 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:00.233 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:00.233 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:00.233 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:00.233 fio-3.35 00:13:00.233 Starting 6 threads 00:13:12.501 00:13:12.501 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=82045: Mon Nov 18 03:13:14 2024 00:13:12.501 read: IOPS=18.0k, BW=70.4MiB/s (73.9MB/s)(705MiB/10004msec) 00:13:12.501 slat (usec): min=2, max=12155, avg= 5.59, stdev=31.76 00:13:12.501 clat (usec): min=77, max=14534, avg=1134.88, 
stdev=1023.73 00:13:12.501 lat (usec): min=80, max=14548, avg=1140.46, stdev=1024.99 00:13:12.501 clat percentiles (usec): 00:13:12.501 | 50.000th=[ 717], 99.000th=[ 4424], 99.900th=[ 6194], 99.990th=[ 8455], 00:13:12.501 | 99.999th=[14484] 00:13:12.501 write: IOPS=18.3k, BW=71.4MiB/s (74.9MB/s)(714MiB/10004msec); 0 zone resets 00:13:12.501 slat (usec): min=12, max=5389, avg=33.51, stdev=140.09 00:13:12.501 clat (usec): min=75, max=10091, avg=1224.25, stdev=1088.77 00:13:12.501 lat (usec): min=88, max=10120, avg=1257.76, stdev=1107.31 00:13:12.501 clat percentiles (usec): 00:13:12.501 | 50.000th=[ 742], 99.000th=[ 4686], 99.900th=[ 6390], 99.990th=[ 8029], 00:13:12.501 | 99.999th=[10028] 00:13:12.501 bw ( KiB/s): min=34479, max=188552, per=100.00%, avg=74836.37, stdev=7663.67, samples=114 00:13:12.501 iops : min= 8617, max=47138, avg=18708.11, stdev=1915.96, samples=114 00:13:12.501 lat (usec) : 100=0.13%, 250=11.65%, 500=26.47%, 750=12.45%, 1000=5.89% 00:13:12.501 lat (msec) : 2=22.84%, 4=18.62%, 10=1.95%, 20=0.01% 00:13:12.501 cpu : usr=46.07%, sys=31.85%, ctx=6057, majf=0, minf=17169 00:13:12.501 IO depths : 1=11.9%, 2=24.4%, 4=50.6%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:12.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.501 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.501 issued rwts: total=180377,182841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.501 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:12.501 00:13:12.501 Run status group 0 (all jobs): 00:13:12.501 READ: bw=70.4MiB/s (73.9MB/s), 70.4MiB/s-70.4MiB/s (73.9MB/s-73.9MB/s), io=705MiB (739MB), run=10004-10004msec 00:13:12.501 WRITE: bw=71.4MiB/s (74.9MB/s), 71.4MiB/s-71.4MiB/s (74.9MB/s-74.9MB/s), io=714MiB (749MB), run=10004-10004msec 00:13:12.501 ----------------------------------------------------- 00:13:12.501 Suppressions used: 00:13:12.501 count bytes template 00:13:12.501 6 48 /usr/src/fio/parse.c 00:13:12.501 2340 224640 /usr/src/fio/iolog.c 00:13:12.501 1 8 libtcmalloc_minimal.so 00:13:12.501 1 904 libcrypto.so 00:13:12.501 ----------------------------------------------------- 00:13:12.501 00:13:12.501 00:13:12.501 real 0m11.093s 00:13:12.501 user 0m28.318s 00:13:12.501 sys 0m19.409s 00:13:12.501 ************************************ 00:13:12.501 END TEST bdev_fio_rw_verify 00:13:12.501 ************************************ 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:12.501 03:13:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:12.502 03:13:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3d5af972-8ea5-426b-92db-80a38d8f366a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3d5af972-8ea5-426b-92db-80a38d8f366a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b4f211df-307b-43a8-ab3c-40d4786037ad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b4f211df-307b-43a8-ab3c-40d4786037ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a03a6771-a343-460c-834a-61f91147edd1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a03a6771-a343-460c-834a-61f91147edd1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c5efb6b9-b866-4f62-9eff-dd79efba8aa3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c5efb6b9-b866-4f62-9eff-dd79efba8aa3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "fe5987fe-ef1a-4456-9e06-e3f5338b1305"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fe5987fe-ef1a-4456-9e06-e3f5338b1305",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a4db3df3-9d3b-4b84-8360-f9f0482c94df"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a4db3df3-9d3b-4b84-8360-f9f0482c94df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:12.502 03:13:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:12.502 03:13:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:12.502 /home/vagrant/spdk_repo/spdk 00:13:12.502 ************************************ 00:13:12.502 END TEST bdev_fio 00:13:12.502 ************************************ 00:13:12.502 03:13:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:12.502 03:13:14 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:12.502 03:13:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:12.502 00:13:12.502 real 0m11.257s 00:13:12.502 user 0m28.389s 00:13:12.502 sys 0m19.483s 00:13:12.502 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.502 03:13:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:12.502 03:13:14 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:12.502 03:13:14 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:12.502 03:13:14 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:12.502 03:13:14 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:12.502 03:13:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.502 ************************************ 00:13:12.502 START TEST bdev_verify 00:13:12.502 ************************************ 00:13:12.502 03:13:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:12.502 [2024-11-18 03:13:14.702338] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:12.502 [2024-11-18 03:13:14.702490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82216 ] 00:13:12.502 [2024-11-18 03:13:14.855267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:12.502 [2024-11-18 03:13:14.908917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.502 [2024-11-18 03:13:14.908986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.502 Running I/O for 5 seconds... 
00:13:14.150 23136.00 IOPS, 90.38 MiB/s [2024-11-18T03:13:18.671Z] 23312.00 IOPS, 91.06 MiB/s [2024-11-18T03:13:19.615Z] 22933.33 IOPS, 89.58 MiB/s [2024-11-18T03:13:20.557Z] 23216.00 IOPS, 90.69 MiB/s [2024-11-18T03:13:20.557Z] 23200.00 IOPS, 90.62 MiB/s 00:13:16.980 Latency(us) 00:13:16.980 [2024-11-18T03:13:20.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.980 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.980 Verification LBA range: start 0x0 length 0xa0000 00:13:16.980 nvme0n1 : 5.03 1756.04 6.86 0.00 0.00 72759.76 10485.76 70173.93 00:13:16.980 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.980 Verification LBA range: start 0xa0000 length 0xa0000 00:13:16.980 nvme0n1 : 5.02 1580.52 6.17 0.00 0.00 80847.14 8418.86 118569.75 00:13:16.980 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.980 Verification LBA range: start 0x0 length 0xbd0bd 00:13:16.980 nvme1n1 : 5.06 2144.74 8.38 0.00 0.00 59415.36 7259.37 62511.26 00:13:16.980 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.980 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:16.980 nvme1n1 : 5.03 2405.06 9.39 0.00 0.00 53004.17 5923.45 62511.26 00:13:16.980 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.980 Verification LBA range: start 0x0 length 0x80000 00:13:16.980 nvme2n1 : 5.07 1920.51 7.50 0.00 0.00 66323.41 7864.32 62511.26 00:13:16.981 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.981 Verification LBA range: start 0x80000 length 0x80000 00:13:16.981 nvme2n1 : 5.04 1904.14 7.44 0.00 0.00 66783.69 8771.74 72997.02 00:13:16.981 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.981 Verification LBA range: start 0x0 length 0x80000 00:13:16.981 nvme2n2 : 5.06 1846.73 7.21 0.00 0.00 68620.48 9779.99 64931.05 00:13:16.981 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.981 Verification LBA range: start 0x80000 length 0x80000 00:13:16.981 nvme2n2 : 5.05 1901.94 7.43 0.00 0.00 66749.89 4159.02 66947.54 00:13:16.981 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.981 Verification LBA range: start 0x0 length 0x80000 00:13:16.981 nvme2n3 : 5.07 1843.02 7.20 0.00 0.00 68614.76 7864.32 61301.37 00:13:16.981 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.981 Verification LBA range: start 0x80000 length 0x80000 00:13:16.981 nvme2n3 : 5.05 1900.98 7.43 0.00 0.00 66642.60 4663.14 78643.20 00:13:16.981 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.981 Verification LBA range: start 0x0 length 0x20000 00:13:16.981 nvme3n1 : 5.07 1842.47 7.20 0.00 0.00 68549.09 5116.85 69367.34 00:13:16.981 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.981 Verification LBA range: start 0x20000 length 0x20000 00:13:16.981 nvme3n1 : 5.06 1922.89 7.51 0.00 0.00 65759.83 1235.10 82676.18 00:13:16.981 [2024-11-18T03:13:20.558Z] =================================================================================================================== 00:13:16.981 [2024-11-18T03:13:20.558Z] Total : 22969.05 89.72 0.00 0.00 66373.62 1235.10 118569.75 00:13:16.981 00:13:16.981 real 0m5.831s 00:13:16.981 user 0m9.296s 00:13:16.981 sys 0m1.420s 00:13:16.981 03:13:20 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.981 ************************************ 00:13:16.981 END TEST bdev_verify 00:13:16.981 ************************************ 00:13:16.981 03:13:20 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:16.981 03:13:20 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:16.981 03:13:20 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:16.981 03:13:20 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.981 03:13:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.981 ************************************ 00:13:16.981 START TEST bdev_verify_big_io 00:13:16.981 ************************************ 00:13:16.981 03:13:20 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:17.241 [2024-11-18 03:13:20.597835] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:17.241 [2024-11-18 03:13:20.597999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82298 ] 00:13:17.241 [2024-11-18 03:13:20.750595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:17.241 [2024-11-18 03:13:20.804023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.242 [2024-11-18 03:13:20.804076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.503 Running I/O for 5 seconds... 
00:13:23.355 960.00 IOPS, 60.00 MiB/s [2024-11-18T03:13:27.193Z] 2560.50 IOPS, 160.03 MiB/s [2024-11-18T03:13:27.193Z] 2968.33 IOPS, 185.52 MiB/s 00:13:23.616 Latency(us) 00:13:23.616 [2024-11-18T03:13:27.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.616 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:23.616 Verification LBA range: start 0x0 length 0xa000 00:13:23.616 nvme0n1 : 5.91 113.64 7.10 0.00 0.00 1070792.22 106470.79 2051982.57 00:13:23.616 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:23.616 Verification LBA range: start 0xa000 length 0xa000 00:13:23.616 nvme0n1 : 5.81 126.58 7.91 0.00 0.00 943133.73 115343.36 1116330.14 00:13:23.616 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:23.616 Verification LBA range: start 0x0 length 0xbd0b 00:13:23.616 nvme1n1 : 5.83 164.96 10.31 0.00 0.00 720996.82 34885.32 1309913.40 00:13:23.616 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:23.616 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:23.616 nvme1n1 : 5.82 148.34 9.27 0.00 0.00 805534.65 21475.64 1948738.17 00:13:23.616 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:23.616 Verification LBA range: start 0x0 length 0x8000 00:13:23.616 nvme2n1 : 5.83 117.96 7.37 0.00 0.00 965083.55 62914.56 871124.68 00:13:23.616 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:23.616 Verification LBA range: start 0x8000 length 0x8000 00:13:23.616 nvme2n1 : 6.03 92.88 5.81 0.00 0.00 1246567.46 170191.95 2284282.49 00:13:23.616 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:23.616 Verification LBA range: start 0x0 length 0x8000 00:13:23.616 nvme2n2 : 5.92 129.76 8.11 0.00 0.00 874324.02 72593.72 1084066.26 00:13:23.616 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:23.616 Verification LBA range: start 0x8000 length 0x8000 00:13:23.616 nvme2n2 : 6.02 180.85 11.30 0.00 0.00 623045.85 18652.55 706578.90 00:13:23.616 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:23.616 Verification LBA range: start 0x0 length 0x8000 00:13:23.616 nvme2n3 : 6.00 165.25 10.33 0.00 0.00 659991.43 24399.56 1910021.51 00:13:23.616 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:23.616 Verification LBA range: start 0x8000 length 0x8000 00:13:23.616 nvme2n3 : 6.03 146.00 9.12 0.00 0.00 760610.64 7208.96 1639004.95 00:13:23.616 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:23.616 Verification LBA range: start 0x0 length 0x2000 00:13:23.616 nvme3n1 : 6.01 114.42 7.15 0.00 0.00 936776.14 2268.55 2606921.26 00:13:23.616 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:23.616 Verification LBA range: start 0x2000 length 0x2000 00:13:23.616 nvme3n1 : 6.02 114.19 7.14 0.00 0.00 941517.40 13812.97 2284282.49 00:13:23.616 [2024-11-18T03:13:27.193Z] =================================================================================================================== 00:13:23.616 [2024-11-18T03:13:27.193Z] Total : 1614.82 100.93 0.00 0.00 848231.98 2268.55 2606921.26 00:13:23.878 00:13:23.878 real 0m6.838s 00:13:23.878 user 0m12.502s 00:13:23.878 sys 0m0.463s 00:13:23.878 03:13:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.878 
************************************ 00:13:23.878 03:13:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.878 END TEST bdev_verify_big_io 00:13:23.878 ************************************ 00:13:23.878 03:13:27 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:23.878 03:13:27 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:23.878 03:13:27 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.878 03:13:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.878 ************************************ 00:13:23.878 START TEST bdev_write_zeroes 00:13:23.878 ************************************ 00:13:23.878 03:13:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:24.140 [2024-11-18 03:13:27.503389] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:24.140 [2024-11-18 03:13:27.503542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82397 ] 00:13:24.140 [2024-11-18 03:13:27.653161] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.140 [2024-11-18 03:13:27.704330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.402 Running I/O for 1 seconds... 
00:13:25.800 90944.00 IOPS, 355.25 MiB/s 00:13:25.800 Latency(us) 00:13:25.800 [2024-11-18T03:13:29.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.800 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:25.800 nvme0n1 : 1.01 14903.61 58.22 0.00 0.00 8577.65 5898.24 22181.42 00:13:25.800 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:25.800 nvme1n1 : 1.02 15988.34 62.45 0.00 0.00 7989.07 2772.68 19660.80 00:13:25.800 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:25.800 nvme2n1 : 1.01 14883.88 58.14 0.00 0.00 8557.16 5948.65 20467.40 00:13:25.800 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:25.800 nvme2n2 : 1.02 14897.94 58.20 0.00 0.00 8511.57 5343.70 19257.50 00:13:25.800 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:25.800 nvme2n3 : 1.02 14853.83 58.02 0.00 0.00 8528.71 5343.70 18551.73 00:13:25.800 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:25.800 nvme3n1 : 1.02 14836.76 57.96 0.00 0.00 8530.70 5419.32 18551.73 00:13:25.800 [2024-11-18T03:13:29.377Z] =================================================================================================================== 00:13:25.800 [2024-11-18T03:13:29.377Z] Total : 90364.36 352.99 0.00 0.00 8443.15 2772.68 22181.42 00:13:25.800 00:13:25.800 real 0m1.742s 00:13:25.800 user 0m1.094s 00:13:25.800 sys 0m0.468s 00:13:25.800 03:13:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.800 ************************************ 00:13:25.800 END TEST bdev_write_zeroes 00:13:25.800 03:13:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:25.800 ************************************ 00:13:25.800 03:13:29 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:25.800 03:13:29 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:25.800 03:13:29 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.800 03:13:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.800 ************************************ 00:13:25.800 START TEST bdev_json_nonenclosed 00:13:25.800 ************************************ 00:13:25.800 03:13:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:25.800 [2024-11-18 03:13:29.315952] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:25.800 [2024-11-18 03:13:29.316097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82439 ] 00:13:26.066 [2024-11-18 03:13:29.465456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.066 [2024-11-18 03:13:29.518086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.066 [2024-11-18 03:13:29.518208] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:26.066 [2024-11-18 03:13:29.518229] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:26.066 [2024-11-18 03:13:29.518242] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:26.066 00:13:26.066 real 0m0.377s 00:13:26.066 user 0m0.151s 00:13:26.066 sys 0m0.121s 00:13:26.066 ************************************ 00:13:26.066 END TEST bdev_json_nonenclosed 00:13:26.066 03:13:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.066 03:13:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:26.066 ************************************ 00:13:26.328 03:13:29 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:26.328 03:13:29 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:26.328 03:13:29 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.328 03:13:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:26.328 ************************************ 00:13:26.328 START TEST bdev_json_nonarray 00:13:26.328 ************************************ 00:13:26.328 03:13:29 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:26.328 [2024-11-18 03:13:29.763701] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:26.328 [2024-11-18 03:13:29.763849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82459 ] 00:13:26.591 [2024-11-18 03:13:29.915336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.591 [2024-11-18 03:13:29.967759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.591 [2024-11-18 03:13:29.967898] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
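Both failures above are deliberate: bdev_json_nonenclosed and bdev_json_nonarray feed bdevperf configurations that are structurally wrong and pass only when spdk_app_start rejects them. The contents of nonenclosed.json and nonarray.json are not shown in this log, but minimal files that would trigger the same two json_config_prepare_ctx errors look like this (assumed contents, for illustration only):

    echo '[]'                    > nonenclosed.json   # top-level value is not an object -> "not enclosed in {}"
    echo '{ "subsystems": {} }'  > nonarray.json      # 'subsystems' present but not an array
    "$SPDK/build/examples/bdevperf" --json nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1           # expected to exit non-zero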
00:13:26.591 [2024-11-18 03:13:29.967916] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:26.591 [2024-11-18 03:13:29.967929] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:26.591 00:13:26.591 real 0m0.385s 00:13:26.591 user 0m0.155s 00:13:26.591 sys 0m0.125s 00:13:26.591 03:13:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.591 ************************************ 00:13:26.591 END TEST bdev_json_nonarray 00:13:26.591 ************************************ 00:13:26.591 03:13:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:26.591 03:13:30 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:26.591 03:13:30 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:26.591 03:13:30 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:26.591 03:13:30 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:26.591 03:13:30 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:26.591 03:13:30 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:26.591 03:13:30 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:26.591 03:13:30 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:26.591 03:13:30 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:26.591 03:13:30 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:26.591 03:13:30 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:26.591 03:13:30 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:27.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:31.373 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:31.373 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:31.373 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:39.500 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:39.500 00:13:39.500 real 0m57.923s 00:13:39.500 user 1m17.491s 00:13:39.500 sys 0m50.880s 00:13:39.500 03:13:41 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.500 ************************************ 00:13:39.500 END TEST blockdev_xnvme 00:13:39.500 ************************************ 00:13:39.500 03:13:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.500 03:13:41 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:39.500 03:13:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:39.500 03:13:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.500 03:13:41 -- common/autotest_common.sh@10 -- # set +x 00:13:39.500 ************************************ 00:13:39.500 START TEST ublk 00:13:39.500 ************************************ 00:13:39.500 03:13:42 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:39.500 * Looking for test storage... 
00:13:39.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:39.500 03:13:42 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:39.500 03:13:42 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:39.500 03:13:42 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:39.500 03:13:42 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:39.500 03:13:42 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.500 03:13:42 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.500 03:13:42 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.500 03:13:42 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.500 03:13:42 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.500 03:13:42 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.500 03:13:42 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.500 03:13:42 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.500 03:13:42 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.500 03:13:42 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.500 03:13:42 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.500 03:13:42 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:39.500 03:13:42 ublk -- scripts/common.sh@345 -- # : 1 00:13:39.500 03:13:42 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.500 03:13:42 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:39.500 03:13:42 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:39.500 03:13:42 ublk -- scripts/common.sh@353 -- # local d=1 00:13:39.500 03:13:42 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.500 03:13:42 ublk -- scripts/common.sh@355 -- # echo 1 00:13:39.500 03:13:42 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.500 03:13:42 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:39.500 03:13:42 ublk -- scripts/common.sh@353 -- # local d=2 00:13:39.500 03:13:42 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.500 03:13:42 ublk -- scripts/common.sh@355 -- # echo 2 00:13:39.500 03:13:42 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.500 03:13:42 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.500 03:13:42 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.500 03:13:42 ublk -- scripts/common.sh@368 -- # return 0 00:13:39.500 03:13:42 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.500 03:13:42 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:39.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.500 --rc genhtml_branch_coverage=1 00:13:39.500 --rc genhtml_function_coverage=1 00:13:39.500 --rc genhtml_legend=1 00:13:39.500 --rc geninfo_all_blocks=1 00:13:39.500 --rc geninfo_unexecuted_blocks=1 00:13:39.500 00:13:39.500 ' 00:13:39.500 03:13:42 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:39.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.500 --rc genhtml_branch_coverage=1 00:13:39.500 --rc genhtml_function_coverage=1 00:13:39.500 --rc genhtml_legend=1 00:13:39.500 --rc geninfo_all_blocks=1 00:13:39.500 --rc geninfo_unexecuted_blocks=1 00:13:39.500 00:13:39.500 ' 00:13:39.500 03:13:42 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:39.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.500 --rc genhtml_branch_coverage=1 00:13:39.500 --rc 
genhtml_function_coverage=1 00:13:39.500 --rc genhtml_legend=1 00:13:39.500 --rc geninfo_all_blocks=1 00:13:39.500 --rc geninfo_unexecuted_blocks=1 00:13:39.500 00:13:39.500 ' 00:13:39.500 03:13:42 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:39.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.500 --rc genhtml_branch_coverage=1 00:13:39.500 --rc genhtml_function_coverage=1 00:13:39.500 --rc genhtml_legend=1 00:13:39.500 --rc geninfo_all_blocks=1 00:13:39.500 --rc geninfo_unexecuted_blocks=1 00:13:39.500 00:13:39.500 ' 00:13:39.500 03:13:42 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:39.500 03:13:42 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:39.500 03:13:42 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:39.500 03:13:42 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:39.500 03:13:42 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:39.500 03:13:42 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:39.500 03:13:42 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:39.500 03:13:42 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:39.500 03:13:42 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:39.500 03:13:42 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:39.500 03:13:42 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:39.500 03:13:42 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:39.500 03:13:42 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:39.500 03:13:42 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:39.500 03:13:42 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:39.500 03:13:42 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:39.500 03:13:42 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:39.500 03:13:42 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:39.501 03:13:42 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:39.501 03:13:42 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:39.501 03:13:42 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:39.501 03:13:42 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.501 03:13:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:39.501 ************************************ 00:13:39.501 START TEST test_save_ublk_config 00:13:39.501 ************************************ 00:13:39.501 03:13:42 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:39.501 03:13:42 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:39.501 03:13:42 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82759 00:13:39.501 03:13:42 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:39.501 03:13:42 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82759 00:13:39.501 03:13:42 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:39.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
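The test_save_ublk_config run starting here follows a fixed arc: load ublk_drv, boot a bare spdk_tgt with ublk debug logging, create a malloc-backed ublk disk, dump the live configuration with save_config, then boot a second target from that dump and check that /dev/ublkb0 comes back. Reduced to a standalone sketch (rpc.py stands in for the harness's rpc_cmd and waitforlisten helpers; flags assumed):

    sudo modprobe ublk_drv
    "$SPDK/build/bin/spdk_tgt" -L ublk & tgtpid=$!
    # stand-in for waitforlisten: retry until /var/tmp/spdk.sock answers
    "$SPDK/scripts/rpc.py" -r 100 rpc_get_methods > /dev/null
    # ... create the ublk disk (the RPC sequence appears later in this log) ...
    config=$("$SPDK/scripts/rpc.py" save_config)
    kill "$tgtpid"; wait "$tgtpid"
    # the '-c /dev/fd/63' seen further down is bash process substitution:
    "$SPDK/build/bin/spdk_tgt" -L ublk -c <(echo "$config") & tgtpid=$!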
00:13:39.501 03:13:42 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82759 ']' 00:13:39.501 03:13:42 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.501 03:13:42 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.501 03:13:42 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.501 03:13:42 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.501 03:13:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:39.501 [2024-11-18 03:13:42.257671] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:39.501 [2024-11-18 03:13:42.257805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82759 ] 00:13:39.501 [2024-11-18 03:13:42.406238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.501 [2024-11-18 03:13:42.462936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.759 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.759 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:39.759 03:13:43 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:39.759 03:13:43 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:39.759 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.759 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:39.759 [2024-11-18 03:13:43.182330] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:39.759 [2024-11-18 03:13:43.182601] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:39.759 malloc0 00:13:39.759 [2024-11-18 03:13:43.206451] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:39.759 [2024-11-18 03:13:43.206536] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:39.760 [2024-11-18 03:13:43.206544] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:39.760 [2024-11-18 03:13:43.206554] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:39.760 [2024-11-18 03:13:43.215400] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:39.760 [2024-11-18 03:13:43.215422] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:39.760 [2024-11-18 03:13:43.222334] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:39.760 [2024-11-18 03:13:43.222432] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:39.760 [2024-11-18 03:13:43.239334] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:39.760 0 00:13:39.760 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.760 03:13:43 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:39.760 03:13:43 ublk.test_save_ublk_config -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.760 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:40.019 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.019 03:13:43 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:40.019 "subsystems": [ 00:13:40.019 { 00:13:40.019 "subsystem": "fsdev", 00:13:40.019 "config": [ 00:13:40.019 { 00:13:40.019 "method": "fsdev_set_opts", 00:13:40.019 "params": { 00:13:40.019 "fsdev_io_pool_size": 65535, 00:13:40.019 "fsdev_io_cache_size": 256 00:13:40.019 } 00:13:40.019 } 00:13:40.019 ] 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "subsystem": "keyring", 00:13:40.019 "config": [] 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "subsystem": "iobuf", 00:13:40.019 "config": [ 00:13:40.019 { 00:13:40.019 "method": "iobuf_set_options", 00:13:40.019 "params": { 00:13:40.019 "small_pool_count": 8192, 00:13:40.019 "large_pool_count": 1024, 00:13:40.019 "small_bufsize": 8192, 00:13:40.019 "large_bufsize": 135168 00:13:40.019 } 00:13:40.019 } 00:13:40.019 ] 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "subsystem": "sock", 00:13:40.019 "config": [ 00:13:40.019 { 00:13:40.019 "method": "sock_set_default_impl", 00:13:40.019 "params": { 00:13:40.019 "impl_name": "posix" 00:13:40.019 } 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "method": "sock_impl_set_options", 00:13:40.019 "params": { 00:13:40.019 "impl_name": "ssl", 00:13:40.019 "recv_buf_size": 4096, 00:13:40.019 "send_buf_size": 4096, 00:13:40.019 "enable_recv_pipe": true, 00:13:40.019 "enable_quickack": false, 00:13:40.019 "enable_placement_id": 0, 00:13:40.019 "enable_zerocopy_send_server": true, 00:13:40.019 "enable_zerocopy_send_client": false, 00:13:40.019 "zerocopy_threshold": 0, 00:13:40.019 "tls_version": 0, 00:13:40.019 "enable_ktls": false 00:13:40.019 } 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "method": "sock_impl_set_options", 00:13:40.019 "params": { 00:13:40.019 "impl_name": "posix", 00:13:40.019 "recv_buf_size": 2097152, 00:13:40.019 "send_buf_size": 2097152, 00:13:40.019 "enable_recv_pipe": true, 00:13:40.019 "enable_quickack": false, 00:13:40.019 "enable_placement_id": 0, 00:13:40.019 "enable_zerocopy_send_server": true, 00:13:40.019 "enable_zerocopy_send_client": false, 00:13:40.019 "zerocopy_threshold": 0, 00:13:40.019 "tls_version": 0, 00:13:40.019 "enable_ktls": false 00:13:40.019 } 00:13:40.019 } 00:13:40.019 ] 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "subsystem": "vmd", 00:13:40.019 "config": [] 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "subsystem": "accel", 00:13:40.019 "config": [ 00:13:40.019 { 00:13:40.019 "method": "accel_set_options", 00:13:40.019 "params": { 00:13:40.019 "small_cache_size": 128, 00:13:40.019 "large_cache_size": 16, 00:13:40.019 "task_count": 2048, 00:13:40.019 "sequence_count": 2048, 00:13:40.019 "buf_count": 2048 00:13:40.019 } 00:13:40.019 } 00:13:40.019 ] 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "subsystem": "bdev", 00:13:40.019 "config": [ 00:13:40.019 { 00:13:40.019 "method": "bdev_set_options", 00:13:40.019 "params": { 00:13:40.019 "bdev_io_pool_size": 65535, 00:13:40.019 "bdev_io_cache_size": 256, 00:13:40.019 "bdev_auto_examine": true, 00:13:40.019 "iobuf_small_cache_size": 128, 00:13:40.019 "iobuf_large_cache_size": 16 00:13:40.019 } 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "method": "bdev_raid_set_options", 00:13:40.019 "params": { 00:13:40.019 "process_window_size_kb": 1024, 00:13:40.019 "process_max_bandwidth_mb_sec": 0 00:13:40.019 } 
00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "method": "bdev_iscsi_set_options", 00:13:40.019 "params": { 00:13:40.019 "timeout_sec": 30 00:13:40.019 } 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "method": "bdev_nvme_set_options", 00:13:40.019 "params": { 00:13:40.019 "action_on_timeout": "none", 00:13:40.019 "timeout_us": 0, 00:13:40.019 "timeout_admin_us": 0, 00:13:40.019 "keep_alive_timeout_ms": 10000, 00:13:40.019 "arbitration_burst": 0, 00:13:40.019 "low_priority_weight": 0, 00:13:40.019 "medium_priority_weight": 0, 00:13:40.019 "high_priority_weight": 0, 00:13:40.019 "nvme_adminq_poll_period_us": 10000, 00:13:40.019 "nvme_ioq_poll_period_us": 0, 00:13:40.019 "io_queue_requests": 0, 00:13:40.019 "delay_cmd_submit": true, 00:13:40.019 "transport_retry_count": 4, 00:13:40.019 "bdev_retry_count": 3, 00:13:40.019 "transport_ack_timeout": 0, 00:13:40.019 "ctrlr_loss_timeout_sec": 0, 00:13:40.019 "reconnect_delay_sec": 0, 00:13:40.019 "fast_io_fail_timeout_sec": 0, 00:13:40.019 "disable_auto_failback": false, 00:13:40.019 "generate_uuids": false, 00:13:40.019 "transport_tos": 0, 00:13:40.019 "nvme_error_stat": false, 00:13:40.019 "rdma_srq_size": 0, 00:13:40.019 "io_path_stat": false, 00:13:40.019 "allow_accel_sequence": false, 00:13:40.019 "rdma_max_cq_size": 0, 00:13:40.019 "rdma_cm_event_timeout_ms": 0, 00:13:40.019 "dhchap_digests": [ 00:13:40.019 "sha256", 00:13:40.019 "sha384", 00:13:40.019 "sha512" 00:13:40.019 ], 00:13:40.019 "dhchap_dhgroups": [ 00:13:40.019 "null", 00:13:40.019 "ffdhe2048", 00:13:40.019 "ffdhe3072", 00:13:40.019 "ffdhe4096", 00:13:40.019 "ffdhe6144", 00:13:40.019 "ffdhe8192" 00:13:40.019 ] 00:13:40.019 } 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "method": "bdev_nvme_set_hotplug", 00:13:40.019 "params": { 00:13:40.019 "period_us": 100000, 00:13:40.019 "enable": false 00:13:40.019 } 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "method": "bdev_malloc_create", 00:13:40.019 "params": { 00:13:40.019 "name": "malloc0", 00:13:40.019 "num_blocks": 8192, 00:13:40.019 "block_size": 4096, 00:13:40.019 "physical_block_size": 4096, 00:13:40.019 "uuid": "7a11ecda-a489-4f8e-9436-9574a36108c8", 00:13:40.019 "optimal_io_boundary": 0, 00:13:40.019 "md_size": 0, 00:13:40.019 "dif_type": 0, 00:13:40.019 "dif_is_head_of_md": false, 00:13:40.019 "dif_pi_format": 0 00:13:40.019 } 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "method": "bdev_wait_for_examine" 00:13:40.019 } 00:13:40.019 ] 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "subsystem": "scsi", 00:13:40.019 "config": null 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "subsystem": "scheduler", 00:13:40.019 "config": [ 00:13:40.019 { 00:13:40.019 "method": "framework_set_scheduler", 00:13:40.019 "params": { 00:13:40.019 "name": "static" 00:13:40.019 } 00:13:40.019 } 00:13:40.019 ] 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "subsystem": "vhost_scsi", 00:13:40.019 "config": [] 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "subsystem": "vhost_blk", 00:13:40.019 "config": [] 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "subsystem": "ublk", 00:13:40.019 "config": [ 00:13:40.019 { 00:13:40.019 "method": "ublk_create_target", 00:13:40.019 "params": { 00:13:40.019 "cpumask": "1" 00:13:40.019 } 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "method": "ublk_start_disk", 00:13:40.019 "params": { 00:13:40.019 "bdev_name": "malloc0", 00:13:40.019 "ublk_id": 0, 00:13:40.019 "num_queues": 1, 00:13:40.019 "queue_depth": 128 00:13:40.019 } 00:13:40.019 } 00:13:40.019 ] 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "subsystem": "nbd", 00:13:40.019 "config": [] 
00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "subsystem": "nvmf", 00:13:40.019 "config": [ 00:13:40.019 { 00:13:40.019 "method": "nvmf_set_config", 00:13:40.019 "params": { 00:13:40.019 "discovery_filter": "match_any", 00:13:40.019 "admin_cmd_passthru": { 00:13:40.019 "identify_ctrlr": false 00:13:40.019 }, 00:13:40.019 "dhchap_digests": [ 00:13:40.019 "sha256", 00:13:40.019 "sha384", 00:13:40.019 "sha512" 00:13:40.019 ], 00:13:40.019 "dhchap_dhgroups": [ 00:13:40.019 "null", 00:13:40.019 "ffdhe2048", 00:13:40.019 "ffdhe3072", 00:13:40.019 "ffdhe4096", 00:13:40.019 "ffdhe6144", 00:13:40.019 "ffdhe8192" 00:13:40.019 ] 00:13:40.019 } 00:13:40.019 }, 00:13:40.019 { 00:13:40.019 "method": "nvmf_set_max_subsystems", 00:13:40.019 "params": { 00:13:40.019 "max_subsystems": 1024 00:13:40.019 } 00:13:40.019 }, 00:13:40.020 { 00:13:40.020 "method": "nvmf_set_crdt", 00:13:40.020 "params": { 00:13:40.020 "crdt1": 0, 00:13:40.020 "crdt2": 0, 00:13:40.020 "crdt3": 0 00:13:40.020 } 00:13:40.020 } 00:13:40.020 ] 00:13:40.020 }, 00:13:40.020 { 00:13:40.020 "subsystem": "iscsi", 00:13:40.020 "config": [ 00:13:40.020 { 00:13:40.020 "method": "iscsi_set_options", 00:13:40.020 "params": { 00:13:40.020 "node_base": "iqn.2016-06.io.spdk", 00:13:40.020 "max_sessions": 128, 00:13:40.020 "max_connections_per_session": 2, 00:13:40.020 "max_queue_depth": 64, 00:13:40.020 "default_time2wait": 2, 00:13:40.020 "default_time2retain": 20, 00:13:40.020 "first_burst_length": 8192, 00:13:40.020 "immediate_data": true, 00:13:40.020 "allow_duplicated_isid": false, 00:13:40.020 "error_recovery_level": 0, 00:13:40.020 "nop_timeout": 60, 00:13:40.020 "nop_in_interval": 30, 00:13:40.020 "disable_chap": false, 00:13:40.020 "require_chap": false, 00:13:40.020 "mutual_chap": false, 00:13:40.020 "chap_group": 0, 00:13:40.020 "max_large_datain_per_connection": 64, 00:13:40.020 "max_r2t_per_connection": 4, 00:13:40.020 "pdu_pool_size": 36864, 00:13:40.020 "immediate_data_pool_size": 16384, 00:13:40.020 "data_out_pool_size": 2048 00:13:40.020 } 00:13:40.020 } 00:13:40.020 ] 00:13:40.020 } 00:13:40.020 ] 00:13:40.020 }' 00:13:40.020 03:13:43 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82759 00:13:40.020 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82759 ']' 00:13:40.020 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82759 00:13:40.020 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:40.020 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:40.020 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82759 00:13:40.020 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:40.020 killing process with pid 82759 00:13:40.020 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:40.020 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82759' 00:13:40.020 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82759 00:13:40.020 03:13:43 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82759 00:13:40.279 [2024-11-18 03:13:43.735019] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:40.279 [2024-11-18 03:13:43.772411] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:13:40.279 [2024-11-18 03:13:43.772536] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:40.279 [2024-11-18 03:13:43.781345] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:40.279 [2024-11-18 03:13:43.781395] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:40.279 [2024-11-18 03:13:43.781403] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:40.279 [2024-11-18 03:13:43.781427] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:40.279 [2024-11-18 03:13:43.781559] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:40.846 03:13:44 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82797 00:13:40.846 03:13:44 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82797 00:13:40.846 03:13:44 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:40.846 03:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82797 ']' 00:13:40.846 03:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.846 03:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:40.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.847 03:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.847 03:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:40.847 03:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:40.847 03:13:44 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:40.847 "subsystems": [ 00:13:40.847 { 00:13:40.847 "subsystem": "fsdev", 00:13:40.847 "config": [ 00:13:40.847 { 00:13:40.847 "method": "fsdev_set_opts", 00:13:40.847 "params": { 00:13:40.847 "fsdev_io_pool_size": 65535, 00:13:40.847 "fsdev_io_cache_size": 256 00:13:40.847 } 00:13:40.847 } 00:13:40.847 ] 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "subsystem": "keyring", 00:13:40.847 "config": [] 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "subsystem": "iobuf", 00:13:40.847 "config": [ 00:13:40.847 { 00:13:40.847 "method": "iobuf_set_options", 00:13:40.847 "params": { 00:13:40.847 "small_pool_count": 8192, 00:13:40.847 "large_pool_count": 1024, 00:13:40.847 "small_bufsize": 8192, 00:13:40.847 "large_bufsize": 135168 00:13:40.847 } 00:13:40.847 } 00:13:40.847 ] 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "subsystem": "sock", 00:13:40.847 "config": [ 00:13:40.847 { 00:13:40.847 "method": "sock_set_default_impl", 00:13:40.847 "params": { 00:13:40.847 "impl_name": "posix" 00:13:40.847 } 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "method": "sock_impl_set_options", 00:13:40.847 "params": { 00:13:40.847 "impl_name": "ssl", 00:13:40.847 "recv_buf_size": 4096, 00:13:40.847 "send_buf_size": 4096, 00:13:40.847 "enable_recv_pipe": true, 00:13:40.847 "enable_quickack": false, 00:13:40.847 "enable_placement_id": 0, 00:13:40.847 "enable_zerocopy_send_server": true, 00:13:40.847 "enable_zerocopy_send_client": false, 00:13:40.847 "zerocopy_threshold": 0, 00:13:40.847 "tls_version": 0, 00:13:40.847 "enable_ktls": false 00:13:40.847 } 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "method": "sock_impl_set_options", 00:13:40.847 "params": { 00:13:40.847 "impl_name": "posix", 
00:13:40.847 "recv_buf_size": 2097152, 00:13:40.847 "send_buf_size": 2097152, 00:13:40.847 "enable_recv_pipe": true, 00:13:40.847 "enable_quickack": false, 00:13:40.847 "enable_placement_id": 0, 00:13:40.847 "enable_zerocopy_send_server": true, 00:13:40.847 "enable_zerocopy_send_client": false, 00:13:40.847 "zerocopy_threshold": 0, 00:13:40.847 "tls_version": 0, 00:13:40.847 "enable_ktls": false 00:13:40.847 } 00:13:40.847 } 00:13:40.847 ] 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "subsystem": "vmd", 00:13:40.847 "config": [] 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "subsystem": "accel", 00:13:40.847 "config": [ 00:13:40.847 { 00:13:40.847 "method": "accel_set_options", 00:13:40.847 "params": { 00:13:40.847 "small_cache_size": 128, 00:13:40.847 "large_cache_size": 16, 00:13:40.847 "task_count": 2048, 00:13:40.847 "sequence_count": 2048, 00:13:40.847 "buf_count": 2048 00:13:40.847 } 00:13:40.847 } 00:13:40.847 ] 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "subsystem": "bdev", 00:13:40.847 "config": [ 00:13:40.847 { 00:13:40.847 "method": "bdev_set_options", 00:13:40.847 "params": { 00:13:40.847 "bdev_io_pool_size": 65535, 00:13:40.847 "bdev_io_cache_size": 256, 00:13:40.847 "bdev_auto_examine": true, 00:13:40.847 "iobuf_small_cache_size": 128, 00:13:40.847 "iobuf_large_cache_size": 16 00:13:40.847 } 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "method": "bdev_raid_set_options", 00:13:40.847 "params": { 00:13:40.847 "process_window_size_kb": 1024, 00:13:40.847 "process_max_bandwidth_mb_sec": 0 00:13:40.847 } 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "method": "bdev_iscsi_set_options", 00:13:40.847 "params": { 00:13:40.847 "timeout_sec": 30 00:13:40.847 } 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "method": "bdev_nvme_set_options", 00:13:40.847 "params": { 00:13:40.847 "action_on_timeout": "none", 00:13:40.847 "timeout_us": 0, 00:13:40.847 "timeout_admin_us": 0, 00:13:40.847 "keep_alive_timeout_ms": 10000, 00:13:40.847 "arbitration_burst": 0, 00:13:40.847 "low_priority_weight": 0, 00:13:40.847 "medium_priority_weight": 0, 00:13:40.847 "high_priority_weight": 0, 00:13:40.847 "nvme_adminq_poll_period_us": 10000, 00:13:40.847 "nvme_ioq_poll_period_us": 0, 00:13:40.847 "io_queue_requests": 0, 00:13:40.847 "delay_cmd_submit": true, 00:13:40.847 "transport_retry_count": 4, 00:13:40.847 "bdev_retry_count": 3, 00:13:40.847 "transport_ack_timeout": 0, 00:13:40.847 "ctrlr_loss_timeout_sec": 0, 00:13:40.847 "reconnect_delay_sec": 0, 00:13:40.847 "fast_io_fail_timeout_sec": 0, 00:13:40.847 "disable_auto_failback": false, 00:13:40.847 "generate_uuids": false, 00:13:40.847 "transport_tos": 0, 00:13:40.847 "nvme_error_stat": false, 00:13:40.847 "rdma_srq_size": 0, 00:13:40.847 "io_path_stat": false, 00:13:40.847 "allow_accel_sequence": false, 00:13:40.847 "rdma_max_cq_size": 0, 00:13:40.847 "rdma_cm_event_timeout_ms": 0, 00:13:40.847 "dhchap_digests": [ 00:13:40.847 "sha256", 00:13:40.847 "sha384", 00:13:40.847 "sha512" 00:13:40.847 ], 00:13:40.847 "dhchap_dhgroups": [ 00:13:40.847 "null", 00:13:40.847 "ffdhe2048", 00:13:40.847 "ffdhe3072", 00:13:40.847 "ffdhe4096", 00:13:40.847 "ffdhe6144", 00:13:40.847 "ffdhe8192" 00:13:40.847 ] 00:13:40.847 } 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "method": "bdev_nvme_set_hotplug", 00:13:40.847 "params": { 00:13:40.847 "period_us": 100000, 00:13:40.847 "enable": false 00:13:40.847 } 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "method": "bdev_malloc_create", 00:13:40.847 "params": { 00:13:40.847 "name": "malloc0", 00:13:40.847 "num_blocks": 8192, 00:13:40.847 
"block_size": 4096, 00:13:40.847 "physical_block_size": 4096, 00:13:40.847 "uuid": "7a11ecda-a489-4f8e-9436-9574a36108c8", 00:13:40.847 "optimal_io_boundary": 0, 00:13:40.847 "md_size": 0, 00:13:40.847 "dif_type": 0, 00:13:40.847 "dif_is_head_of_md": false, 00:13:40.847 "dif_pi_format": 0 00:13:40.847 } 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "method": "bdev_wait_for_examine" 00:13:40.847 } 00:13:40.847 ] 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "subsystem": "scsi", 00:13:40.847 "config": null 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "subsystem": "scheduler", 00:13:40.847 "config": [ 00:13:40.847 { 00:13:40.847 "method": "framework_set_scheduler", 00:13:40.847 "params": { 00:13:40.847 "name": "static" 00:13:40.847 } 00:13:40.847 } 00:13:40.847 ] 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "subsystem": "vhost_scsi", 00:13:40.847 "config": [] 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "subsystem": "vhost_blk", 00:13:40.847 "config": [] 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "subsystem": "ublk", 00:13:40.847 "config": [ 00:13:40.847 { 00:13:40.847 "method": "ublk_create_target", 00:13:40.847 "params": { 00:13:40.847 "cpumask": "1" 00:13:40.847 } 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "method": "ublk_start_disk", 00:13:40.847 "params": { 00:13:40.847 "bdev_name": "malloc0", 00:13:40.847 "ublk_id": 0, 00:13:40.847 "num_queues": 1, 00:13:40.847 "queue_depth": 128 00:13:40.847 } 00:13:40.847 } 00:13:40.847 ] 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "subsystem": "nbd", 00:13:40.847 "config": [] 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "subsystem": "nvmf", 00:13:40.847 "config": [ 00:13:40.847 { 00:13:40.847 "method": "nvmf_set_config", 00:13:40.847 "params": { 00:13:40.847 "discovery_filter": "match_any", 00:13:40.847 "admin_cmd_passthru": { 00:13:40.847 "identify_ctrlr": false 00:13:40.847 }, 00:13:40.847 "dhchap_digests": [ 00:13:40.847 "sha256", 00:13:40.847 "sha384", 00:13:40.847 "sha512" 00:13:40.847 ], 00:13:40.847 "dhchap_dhgroups": [ 00:13:40.847 "null", 00:13:40.847 "ffdhe2048", 00:13:40.847 "ffdhe3072", 00:13:40.847 "ffdhe4096", 00:13:40.847 "ffdhe6144", 00:13:40.847 "ffdhe8192" 00:13:40.847 ] 00:13:40.847 } 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "method": "nvmf_set_max_subsystems", 00:13:40.847 "params": { 00:13:40.847 "max_subsystems": 1024 00:13:40.847 } 00:13:40.847 }, 00:13:40.847 { 00:13:40.847 "method": "nvmf_set_crdt", 00:13:40.847 "params": { 00:13:40.847 "crdt1": 0, 00:13:40.847 "crdt2": 0, 00:13:40.847 "crdt3": 0 00:13:40.847 } 00:13:40.847 } 00:13:40.847 ] 00:13:40.847 }, 00:13:40.848 { 00:13:40.848 "subsystem": "iscsi", 00:13:40.848 "config": [ 00:13:40.848 { 00:13:40.848 "method": "iscsi_set_options", 00:13:40.848 "params": { 00:13:40.848 "node_base": "iqn.2016-06.io.spdk", 00:13:40.848 "max_sessions": 128, 00:13:40.848 "max_connections_per_session": 2, 00:13:40.848 "max_queue_depth": 64, 00:13:40.848 "default_time2wait": 2, 00:13:40.848 "default_time2retain": 20, 00:13:40.848 "first_burst_length": 8192, 00:13:40.848 "immediate_data": true, 00:13:40.848 "allow_duplicated_isid": false, 00:13:40.848 "error_recovery_level": 0, 00:13:40.848 "nop_timeout": 60, 00:13:40.848 "nop_in_interval": 30, 00:13:40.848 "disable_chap": false, 00:13:40.848 "require_chap": false, 00:13:40.848 "mutual_chap": false, 00:13:40.848 "chap_group": 0, 00:13:40.848 "max_large_datain_per_connection": 64, 00:13:40.848 "max_r2t_per_connection": 4, 00:13:40.848 "pdu_pool_size": 36864, 00:13:40.848 "immediate_data_pool_size": 16384, 00:13:40.848 "data_out_pool_size": 2048 
00:13:40.848 } 00:13:40.848 } 00:13:40.848 ] 00:13:40.848 } 00:13:40.848 ] 00:13:40.848 }' 00:13:40.848 [2024-11-18 03:13:44.177233] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:40.848 [2024-11-18 03:13:44.177343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82797 ] 00:13:40.848 [2024-11-18 03:13:44.310931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.848 [2024-11-18 03:13:44.355974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.106 [2024-11-18 03:13:44.655330] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:41.106 [2024-11-18 03:13:44.655593] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:41.106 [2024-11-18 03:13:44.663449] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:41.106 [2024-11-18 03:13:44.663524] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:41.106 [2024-11-18 03:13:44.663532] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:41.106 [2024-11-18 03:13:44.663539] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:41.106 [2024-11-18 03:13:44.672389] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:41.106 [2024-11-18 03:13:44.672410] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:41.106 [2024-11-18 03:13:44.679336] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:41.106 [2024-11-18 03:13:44.679421] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:41.365 [2024-11-18 03:13:44.696340] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:41.625 03:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:41.625 03:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:41.625 03:13:44 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:41.625 03:13:44 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:41.625 03:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.625 03:13:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82797 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82797 ']' 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82797 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82797 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:41.625 killing process with pid 82797 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82797' 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82797 00:13:41.625 03:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82797 00:13:41.884 [2024-11-18 03:13:45.218127] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:41.884 [2024-11-18 03:13:45.246407] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:41.884 [2024-11-18 03:13:45.246525] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:41.884 [2024-11-18 03:13:45.254340] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:41.884 [2024-11-18 03:13:45.254392] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:41.884 [2024-11-18 03:13:45.254405] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:41.884 [2024-11-18 03:13:45.254434] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:41.884 [2024-11-18 03:13:45.254569] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:42.143 03:13:45 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:42.143 00:13:42.143 real 0m3.398s 00:13:42.143 user 0m2.504s 00:13:42.143 sys 0m1.568s 00:13:42.144 03:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.144 03:13:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:42.144 ************************************ 00:13:42.144 END TEST test_save_ublk_config 00:13:42.144 ************************************ 00:13:42.144 03:13:45 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82842 00:13:42.144 03:13:45 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:42.144 03:13:45 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:42.144 03:13:45 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82842 00:13:42.144 03:13:45 ublk -- common/autotest_common.sh@831 -- # '[' -z 82842 ']' 00:13:42.144 03:13:45 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.144 03:13:45 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.144 03:13:45 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.144 03:13:45 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.144 03:13:45 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:42.144 [2024-11-18 03:13:45.685697] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
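The fresh target above is started with -m 0x3. SPDK core masks are hex bitmaps with one bit per CPU, which is why the EAL parameters line that follows carries -c 0x3 and the log then reports two cores with reactors on core 0 and core 1:

    "$SPDK/build/bin/spdk_tgt" -m 0x3 -L ublk    # 0x3 = 0b11 -> cores 0 and 1
    # -m 0x1 would pin a single reactor to core 0; -m 0x5 would select cores 0 and 2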
00:13:42.144 [2024-11-18 03:13:45.685819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82842 ] 00:13:42.403 [2024-11-18 03:13:45.834221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:42.403 [2024-11-18 03:13:45.866492] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.403 [2024-11-18 03:13:45.866500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.969 03:13:46 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:42.969 03:13:46 ublk -- common/autotest_common.sh@864 -- # return 0 00:13:42.969 03:13:46 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:42.969 03:13:46 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:42.969 03:13:46 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.969 03:13:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:42.969 ************************************ 00:13:42.969 START TEST test_create_ublk 00:13:42.969 ************************************ 00:13:42.969 03:13:46 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:13:42.969 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:42.969 03:13:46 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.969 03:13:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:42.969 [2024-11-18 03:13:46.489339] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:42.969 [2024-11-18 03:13:46.490455] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:42.969 03:13:46 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.969 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:42.969 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:42.969 03:13:46 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.969 03:13:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:42.969 03:13:46 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.969 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:43.229 03:13:46 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.229 03:13:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.229 [2024-11-18 03:13:46.552447] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:43.229 [2024-11-18 03:13:46.552823] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:43.229 [2024-11-18 03:13:46.552837] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:43.229 [2024-11-18 03:13:46.552846] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:43.229 [2024-11-18 03:13:46.561514] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:43.229 [2024-11-18 03:13:46.561541] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:43.229 
[2024-11-18 03:13:46.568332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:43.229 [2024-11-18 03:13:46.568940] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:43.229 [2024-11-18 03:13:46.582406] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:43.229 03:13:46 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:43.229 03:13:46 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.229 03:13:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.229 03:13:46 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:43.229 { 00:13:43.229 "ublk_device": "/dev/ublkb0", 00:13:43.229 "id": 0, 00:13:43.229 "queue_depth": 512, 00:13:43.229 "num_queues": 4, 00:13:43.229 "bdev_name": "Malloc0" 00:13:43.229 } 00:13:43.229 ]' 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:43.229 03:13:46 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:43.229 03:13:46 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:43.229 03:13:46 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:43.229 03:13:46 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:43.229 03:13:46 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:43.229 03:13:46 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:43.229 03:13:46 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:43.229 03:13:46 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:43.229 03:13:46 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:43.229 03:13:46 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:43.229 03:13:46 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
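(The device exercised below was produced by four RPCs; issued by hand with scripts/rpc.py they would look like the following minimal sketch, assuming the ublk_drv module is loaded and the target started above is listening:)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" ublk_create_target                        # one-time ublk target init
    "$rpc" bdev_malloc_create -b Malloc0 128 4096    # 128 MiB bdev, 4 KiB blocks
    "$rpc" ublk_start_disk Malloc0 0 -q 4 -d 512     # ADD_DEV/SET_PARAMS/START_DEV -> /dev/ublkb0
    "$rpc" ublk_get_disks -n 0                       # JSON record checked by the jq calls above
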
00:13:43.230 03:13:46 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:43.488 fio: verification read phase will never start because write phase uses all of runtime 00:13:43.488 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:43.488 fio-3.35 00:13:43.488 Starting 1 process 00:13:53.476 00:13:53.476 fio_test: (groupid=0, jobs=1): err= 0: pid=82881: Mon Nov 18 03:13:56 2024 00:13:53.477 write: IOPS=21.0k, BW=82.1MiB/s (86.1MB/s)(821MiB/10001msec); 0 zone resets 00:13:53.477 clat (usec): min=32, max=3939, avg=46.76, stdev=78.80 00:13:53.477 lat (usec): min=32, max=3939, avg=47.23, stdev=78.83 00:13:53.477 clat percentiles (usec): 00:13:53.477 | 1.00th=[ 36], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 40], 00:13:53.477 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 44], 00:13:53.477 | 70.00th=[ 45], 80.00th=[ 47], 90.00th=[ 51], 95.00th=[ 59], 00:13:53.477 | 99.00th=[ 67], 99.50th=[ 75], 99.90th=[ 1123], 99.95th=[ 2376], 00:13:53.477 | 99.99th=[ 3359] 00:13:53.477 bw ( KiB/s): min=78616, max=92360, per=100.00%, avg=84221.05, stdev=3555.99, samples=19 00:13:53.477 iops : min=19654, max=23090, avg=21055.26, stdev=889.00, samples=19 00:13:53.477 lat (usec) : 50=89.11%, 100=10.63%, 250=0.11%, 500=0.03%, 750=0.01% 00:13:53.477 lat (usec) : 1000=0.01% 00:13:53.477 lat (msec) : 2=0.04%, 4=0.07% 00:13:53.477 cpu : usr=3.34%, sys=16.27%, ctx=210199, majf=0, minf=795 00:13:53.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:53.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.477 issued rwts: total=0,210205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:53.477 00:13:53.477 Run status group 0 (all jobs): 00:13:53.477 WRITE: bw=82.1MiB/s (86.1MB/s), 82.1MiB/s-82.1MiB/s (86.1MB/s-86.1MB/s), io=821MiB (861MB), run=10001-10001msec 00:13:53.477 00:13:53.477 Disk stats (read/write): 00:13:53.477 ublkb0: ios=0/208191, merge=0/0, ticks=0/8076, in_queue=8076, util=99.07% 00:13:53.477 03:13:56 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:53.477 03:13:56 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.477 03:13:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.477 [2024-11-18 03:13:57.002702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:53.477 [2024-11-18 03:13:57.037364] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:53.477 [2024-11-18 03:13:57.037926] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:53.477 [2024-11-18 03:13:57.045332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:53.477 [2024-11-18 03:13:57.045556] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:53.477 [2024-11-18 03:13:57.045563] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.736 03:13:57 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.736 [2024-11-18 03:13:57.061475] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:53.736 request: 00:13:53.736 { 00:13:53.736 "ublk_id": 0, 00:13:53.736 "method": "ublk_stop_disk", 00:13:53.736 "req_id": 1 00:13:53.736 } 00:13:53.736 Got JSON-RPC error response 00:13:53.736 response: 00:13:53.736 { 00:13:53.736 "code": -19, 00:13:53.736 "message": "No such device" 00:13:53.736 } 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:53.736 03:13:57 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.736 [2024-11-18 03:13:57.077388] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:53.736 [2024-11-18 03:13:57.078754] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:53.736 [2024-11-18 03:13:57.078784] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.736 03:13:57 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.736 03:13:57 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:53.736 03:13:57 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.736 03:13:57 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:53.736 03:13:57 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:53.736 03:13:57 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:53.736 03:13:57 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.736 03:13:57 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:53.736 03:13:57 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:13:53.736 ************************************ 00:13:53.736 END TEST test_create_ublk 00:13:53.736 ************************************ 00:13:53.736 03:13:57 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:53.736 00:13:53.736 real 0m10.752s 00:13:53.736 user 0m0.630s 00:13:53.736 sys 0m1.701s 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.736 03:13:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.736 03:13:57 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:53.736 03:13:57 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:53.736 03:13:57 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:53.736 03:13:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.736 ************************************ 00:13:53.736 START TEST test_create_multi_ublk 00:13:53.736 ************************************ 00:13:53.736 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:13:53.736 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:53.736 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.736 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.736 [2024-11-18 03:13:57.288329] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:53.736 [2024-11-18 03:13:57.289264] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:53.736 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.736 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:13:53.736 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:13:53.736 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:53.736 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:53.736 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.736 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.995 [2024-11-18 03:13:57.360439] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:13:53.995 [2024-11-18 03:13:57.360726] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:53.995 [2024-11-18 03:13:57.360739] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:53.995 [2024-11-18 03:13:57.360744] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:53.995 [2024-11-18 03:13:57.384336] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:53.995 [2024-11-18 03:13:57.384355] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:53.995 [2024-11-18 03:13:57.396334] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:53.995 [2024-11-18 03:13:57.396808] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:53.995 [2024-11-18 03:13:57.436340] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.995 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.995 [2024-11-18 03:13:57.520425] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:53.995 [2024-11-18 03:13:57.520713] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:53.995 [2024-11-18 03:13:57.520724] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:53.995 [2024-11-18 03:13:57.520730] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:53.995 [2024-11-18 03:13:57.532340] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:53.995 [2024-11-18 03:13:57.532360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:53.995 [2024-11-18 03:13:57.544333] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:53.995 [2024-11-18 03:13:57.544818] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:54.254 [2024-11-18 03:13:57.580336] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:54.254 03:13:57 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.254 [2024-11-18 03:13:57.664430] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:54.254 [2024-11-18 03:13:57.664715] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:54.254 [2024-11-18 03:13:57.664726] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:54.254 [2024-11-18 03:13:57.664732] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:54.254 [2024-11-18 03:13:57.676341] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:54.254 [2024-11-18 03:13:57.676359] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:54.254 [2024-11-18 03:13:57.688343] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:54.254 [2024-11-18 03:13:57.688820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:54.254 [2024-11-18 03:13:57.720335] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.254 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.254 [2024-11-18 03:13:57.804416] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:54.254 [2024-11-18 03:13:57.804705] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:54.254 [2024-11-18 03:13:57.804715] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:54.254 [2024-11-18 03:13:57.804721] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:54.254 [2024-11-18 
03:13:57.816348] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:54.254 [2024-11-18 03:13:57.816370] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:54.513 [2024-11-18 03:13:57.828327] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:54.513 [2024-11-18 03:13:57.828801] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:54.513 [2024-11-18 03:13:57.852336] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:54.513 { 00:13:54.513 "ublk_device": "/dev/ublkb0", 00:13:54.513 "id": 0, 00:13:54.513 "queue_depth": 512, 00:13:54.513 "num_queues": 4, 00:13:54.513 "bdev_name": "Malloc0" 00:13:54.513 }, 00:13:54.513 { 00:13:54.513 "ublk_device": "/dev/ublkb1", 00:13:54.513 "id": 1, 00:13:54.513 "queue_depth": 512, 00:13:54.513 "num_queues": 4, 00:13:54.513 "bdev_name": "Malloc1" 00:13:54.513 }, 00:13:54.513 { 00:13:54.513 "ublk_device": "/dev/ublkb2", 00:13:54.513 "id": 2, 00:13:54.513 "queue_depth": 512, 00:13:54.513 "num_queues": 4, 00:13:54.513 "bdev_name": "Malloc2" 00:13:54.513 }, 00:13:54.513 { 00:13:54.513 "ublk_device": "/dev/ublkb3", 00:13:54.513 "id": 3, 00:13:54.513 "queue_depth": 512, 00:13:54.513 "num_queues": 4, 00:13:54.513 "bdev_name": "Malloc3" 00:13:54.513 } 00:13:54.513 ]' 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:54.513 03:13:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:54.513 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:54.513 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:54.513 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:54.513 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:54.513 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
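(The four-device listing above is the product of one loop over the disk ids; a compact sketch of that loop, assuming the target already exists:)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    for i in 0 1 2 3; do
        "$rpc" bdev_malloc_create -b "Malloc$i" 128 4096     # backing bdev per disk
        "$rpc" ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # exposes /dev/ublkb$i
    done

    # Field-by-field validation, mirroring the jq checks in the trace:
    "$rpc" ublk_get_disks | jq -r '.[1].ublk_device'         # expect /dev/ublkb1
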
00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:54.772 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.031 [2024-11-18 03:13:58.540412] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:55.031 [2024-11-18 03:13:58.577791] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:55.031 [2024-11-18 03:13:58.578764] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:55.031 [2024-11-18 03:13:58.584334] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:55.031 [2024-11-18 03:13:58.584556] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:55.031 [2024-11-18 03:13:58.584562] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.031 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.031 [2024-11-18 03:13:58.597395] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:55.290 [2024-11-18 03:13:58.636678] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:55.290 [2024-11-18 03:13:58.637780] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:55.290 [2024-11-18 03:13:58.643337] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:55.290 [2024-11-18 03:13:58.643553] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:55.290 [2024-11-18 03:13:58.643559] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:55.290 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.290 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.290 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:55.290 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.290 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.290 [2024-11-18 03:13:58.658413] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:55.290 [2024-11-18 03:13:58.688783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:55.290 [2024-11-18 03:13:58.689710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:55.290 [2024-11-18 03:13:58.699334] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:55.290 [2024-11-18 03:13:58.699554] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:55.290 [2024-11-18 03:13:58.699560] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:55.290 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.290 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.290 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:55.290 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.290 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
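(The stop sequence running here, plus the destroy that follows, reduces to the reverse of creation: stop every disk, tear down the target, delete the backing bdevs. A minimal sketch:)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    for i in 0 1 2 3; do
        "$rpc" ublk_stop_disk "$i"            # STOP_DEV then DEL_DEV per device
    done
    "$rpc" -t 120 ublk_destroy_target         # generous timeout, as in the trace below
    for i in 0 1 2 3; do
        "$rpc" bdev_malloc_delete "Malloc$i"
    done
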
00:13:55.290 [2024-11-18 03:13:58.715379] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:55.290 [2024-11-18 03:13:58.747774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:55.290 [2024-11-18 03:13:58.748645] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:55.290 [2024-11-18 03:13:58.759346] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:55.290 [2024-11-18 03:13:58.759560] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:55.290 [2024-11-18 03:13:58.759565] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:55.290 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.290 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:55.549 [2024-11-18 03:13:58.950408] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:55.549 [2024-11-18 03:13:58.951242] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:55.549 [2024-11-18 03:13:58.951274] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:55.549 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:13:55.549 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.549 03:13:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:55.549 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.549 03:13:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.549 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.549 03:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.549 03:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:55.549 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.549 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.549 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.549 03:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.549 03:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:55.549 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.549 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:13:55.809 ************************************ 00:13:55.809 END TEST test_create_multi_ublk 00:13:55.809 ************************************ 00:13:55.809 03:13:59 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:55.809 00:13:55.809 real 0m2.008s 00:13:55.809 user 0m0.832s 00:13:55.809 sys 0m0.135s 00:13:55.810 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.810 03:13:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.810 03:13:59 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:55.810 03:13:59 ublk -- ublk/ublk.sh@147 -- # cleanup 00:13:55.810 03:13:59 ublk -- ublk/ublk.sh@130 -- # killprocess 82842 00:13:55.810 03:13:59 ublk -- common/autotest_common.sh@950 -- # '[' -z 82842 ']' 00:13:55.810 03:13:59 ublk -- common/autotest_common.sh@954 -- # kill -0 82842 00:13:55.810 03:13:59 ublk -- common/autotest_common.sh@955 -- # uname 00:13:55.810 03:13:59 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.810 03:13:59 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82842 00:13:55.810 killing process with pid 82842 00:13:55.810 03:13:59 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:55.810 03:13:59 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:55.810 03:13:59 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82842' 00:13:55.810 03:13:59 ublk -- common/autotest_common.sh@969 -- # kill 82842 00:13:55.810 03:13:59 ublk -- common/autotest_common.sh@974 -- # wait 82842 00:13:56.068 [2024-11-18 03:13:59.483737] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:56.068 [2024-11-18 03:13:59.483795] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:56.327 00:13:56.327 real 0m17.749s 00:13:56.327 user 0m27.793s 00:13:56.327 sys 0m7.877s 00:13:56.327 03:13:59 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.327 03:13:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:56.327 ************************************ 00:13:56.327 END TEST ublk 00:13:56.327 ************************************ 00:13:56.327 03:13:59 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:56.327 03:13:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:13:56.327 03:13:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.327 03:13:59 -- common/autotest_common.sh@10 -- # set +x 00:13:56.327 ************************************ 00:13:56.327 START TEST ublk_recovery 00:13:56.327 ************************************ 00:13:56.327 03:13:59 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:56.327 * Looking for test storage... 00:13:56.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:56.327 03:13:59 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:56.327 03:13:59 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:56.327 03:13:59 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:13:56.586 03:13:59 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.586 03:13:59 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:13:56.586 03:13:59 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.586 03:13:59 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:56.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.586 --rc genhtml_branch_coverage=1 00:13:56.586 --rc genhtml_function_coverage=1 00:13:56.586 --rc genhtml_legend=1 00:13:56.586 --rc geninfo_all_blocks=1 00:13:56.586 --rc geninfo_unexecuted_blocks=1 00:13:56.586 00:13:56.586 ' 00:13:56.586 03:13:59 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:56.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.586 --rc genhtml_branch_coverage=1 00:13:56.586 --rc genhtml_function_coverage=1 00:13:56.586 --rc genhtml_legend=1 00:13:56.586 --rc geninfo_all_blocks=1 00:13:56.586 --rc geninfo_unexecuted_blocks=1 00:13:56.586 00:13:56.587 ' 00:13:56.587 03:13:59 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:56.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.587 --rc genhtml_branch_coverage=1 00:13:56.587 --rc genhtml_function_coverage=1 00:13:56.587 --rc genhtml_legend=1 00:13:56.587 --rc geninfo_all_blocks=1 00:13:56.587 --rc geninfo_unexecuted_blocks=1 00:13:56.587 00:13:56.587 ' 00:13:56.587 03:13:59 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:56.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.587 --rc genhtml_branch_coverage=1 00:13:56.587 --rc genhtml_function_coverage=1 00:13:56.587 --rc genhtml_legend=1 00:13:56.587 --rc geninfo_all_blocks=1 00:13:56.587 --rc geninfo_unexecuted_blocks=1 00:13:56.587 00:13:56.587 ' 00:13:56.587 03:13:59 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:56.587 03:13:59 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:56.587 03:13:59 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:56.587 03:13:59 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:56.587 03:13:59 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:56.587 03:13:59 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:56.587 03:13:59 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:56.587 03:13:59 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:56.587 03:13:59 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:13:56.587 03:13:59 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:56.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.587 03:13:59 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=83208 00:13:56.587 03:13:59 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:56.587 03:13:59 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 83208 00:13:56.587 03:13:59 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83208 ']' 00:13:56.587 03:13:59 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.587 03:13:59 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.587 03:13:59 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:56.587 03:13:59 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.587 03:13:59 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.587 03:13:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.587 [2024-11-18 03:14:00.001653] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:56.587 [2024-11-18 03:14:00.001753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83208 ] 00:13:56.587 [2024-11-18 03:14:00.143570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:56.846 [2024-11-18 03:14:00.176128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.846 [2024-11-18 03:14:00.176176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.413 03:14:00 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:57.413 03:14:00 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:13:57.413 03:14:00 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:57.413 03:14:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.413 03:14:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.413 [2024-11-18 03:14:00.859335] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:57.413 [2024-11-18 03:14:00.860424] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:57.413 03:14:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.413 03:14:00 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:57.413 03:14:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.413 03:14:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.413 malloc0 00:13:57.413 03:14:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.413 03:14:00 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:57.413 03:14:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.413 03:14:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.413 [2024-11-18 03:14:00.893446] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 
num_queues 2 queue_depth 128 00:13:57.413 [2024-11-18 03:14:00.893551] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:57.413 [2024-11-18 03:14:00.893559] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:57.413 [2024-11-18 03:14:00.893567] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:57.413 [2024-11-18 03:14:00.899367] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:57.413 [2024-11-18 03:14:00.899393] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:57.413 [2024-11-18 03:14:00.903333] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:57.413 [2024-11-18 03:14:00.903476] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:57.413 [2024-11-18 03:14:00.910399] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:57.413 1 00:13:57.413 03:14:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.413 03:14:00 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:58.348 03:14:01 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=83241 00:13:58.348 03:14:01 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:58.348 03:14:01 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:58.605 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.605 fio-3.35 00:13:58.605 Starting 1 process 00:14:03.871 03:14:06 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 83208 00:14:03.871 03:14:06 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:09.208 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 83208 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:09.208 03:14:11 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=83352 00:14:09.208 03:14:11 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:09.208 03:14:11 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.208 03:14:11 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 83352 00:14:09.208 03:14:11 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83352 ']' 00:14:09.208 03:14:11 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.208 03:14:11 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.208 03:14:11 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.208 03:14:11 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.208 03:14:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.208 [2024-11-18 03:14:12.005804] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
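(The recovery scenario condenses to: hard-kill the target under active I/O, start a fresh target, recreate the backing bdev, and re-attach the still-present kernel device with ublk_recover_disk. A sketch of the flow, assuming the same ids as the trace; the socket wait is elided as above:)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    kill -9 "$spdk_pid"                   # fio keeps /dev/ublkb1 open across the crash
    "$tgt" -m 0x3 -L ublk & spdk_pid=$!   # fresh target; wait for the RPC socket again

    "$rpc" ublk_create_target
    "$rpc" bdev_malloc_create -b malloc0 64 4096
    "$rpc" ublk_recover_disk malloc0 1    # GET_DEV_INFO, then START/END_USER_RECOVERY
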
00:14:09.208 [2024-11-18 03:14:12.006133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83352 ] 00:14:09.208 [2024-11-18 03:14:12.154550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:09.208 [2024-11-18 03:14:12.187300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.208 [2024-11-18 03:14:12.187257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.466 03:14:12 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.466 03:14:12 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:09.466 03:14:12 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:09.466 03:14:12 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.466 03:14:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.466 [2024-11-18 03:14:12.845331] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:09.466 [2024-11-18 03:14:12.846442] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:09.466 03:14:12 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.466 03:14:12 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:09.466 03:14:12 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.466 03:14:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.466 malloc0 00:14:09.466 03:14:12 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.466 03:14:12 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:09.466 03:14:12 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.466 03:14:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.466 [2024-11-18 03:14:12.877464] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:09.466 [2024-11-18 03:14:12.877505] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:09.466 [2024-11-18 03:14:12.877513] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:09.466 [2024-11-18 03:14:12.885370] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:09.466 [2024-11-18 03:14:12.885391] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:09.466 1 00:14:09.466 03:14:12 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.466 03:14:12 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 83241 00:14:10.401 [2024-11-18 03:14:13.885441] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:10.401 [2024-11-18 03:14:13.893343] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:10.402 [2024-11-18 03:14:13.893370] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:11.337 [2024-11-18 03:14:14.893399] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:11.337 [2024-11-18 03:14:14.897341] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:11.337 [2024-11-18 03:14:14.897353] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:12.713 [2024-11-18 03:14:15.897379] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:12.713 [2024-11-18 03:14:15.905331] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:12.713 [2024-11-18 03:14:15.905409] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:12.713 [2024-11-18 03:14:15.905430] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:12.713 [2024-11-18 03:14:15.905546] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:34.640 [2024-11-18 03:14:37.094335] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:34.640 [2024-11-18 03:14:37.100936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:34.640 [2024-11-18 03:14:37.108508] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:34.640 [2024-11-18 03:14:37.108530] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:01.178 00:15:01.178 fio_test: (groupid=0, jobs=1): err= 0: pid=83244: Mon Nov 18 03:15:02 2024 00:15:01.178 read: IOPS=16.0k, BW=62.6MiB/s (65.6MB/s)(3755MiB/60002msec) 00:15:01.178 slat (nsec): min=1117, max=656257, avg=4816.90, stdev=1615.84 00:15:01.178 clat (usec): min=666, max=30193k, avg=4331.05, stdev=268427.67 00:15:01.178 lat (usec): min=671, max=30193k, avg=4335.87, stdev=268427.67 00:15:01.178 clat percentiles (usec): 00:15:01.178 | 1.00th=[ 1582], 5.00th=[ 1696], 10.00th=[ 1729], 20.00th=[ 1745], 00:15:01.178 | 30.00th=[ 1762], 40.00th=[ 1778], 50.00th=[ 1795], 60.00th=[ 1811], 00:15:01.178 | 70.00th=[ 1811], 80.00th=[ 1844], 90.00th=[ 1893], 95.00th=[ 3097], 00:15:01.178 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 7504], 99.95th=[12387], 00:15:01.178 | 99.99th=[12911] 00:15:01.178 bw ( KiB/s): min= 5672, max=137552, per=100.00%, avg=126170.43, stdev=23782.21, samples=60 00:15:01.178 iops : min= 1418, max=34388, avg=31542.60, stdev=5945.55, samples=60 00:15:01.178 write: IOPS=16.0k, BW=62.5MiB/s (65.5MB/s)(3750MiB/60002msec); 0 zone resets 00:15:01.178 slat (nsec): min=1067, max=388825, avg=4852.32, stdev=1459.12 00:15:01.178 clat (usec): min=611, max=30193k, avg=3652.39, stdev=222180.38 00:15:01.178 lat (usec): min=615, max=30193k, avg=3657.24, stdev=222180.38 00:15:01.178 clat percentiles (usec): 00:15:01.178 | 1.00th=[ 1614], 5.00th=[ 1778], 10.00th=[ 1811], 20.00th=[ 1827], 00:15:01.178 | 30.00th=[ 1844], 40.00th=[ 1860], 50.00th=[ 1876], 60.00th=[ 1893], 00:15:01.178 | 70.00th=[ 1909], 80.00th=[ 1926], 90.00th=[ 1975], 95.00th=[ 3032], 00:15:01.178 | 99.00th=[ 5342], 99.50th=[ 5800], 99.90th=[ 7504], 99.95th=[12387], 00:15:01.179 | 99.99th=[12911] 00:15:01.179 bw ( KiB/s): min= 5352, max=137768, per=100.00%, avg=126014.65, stdev=23851.68, samples=60 00:15:01.179 iops : min= 1338, max=34442, avg=31503.65, stdev=5962.92, samples=60 00:15:01.179 lat (usec) : 750=0.01%, 1000=0.01% 00:15:01.179 lat (msec) : 2=91.55%, 4=5.34%, 10=3.04%, 20=0.06%, >=2000=0.01% 00:15:01.179 cpu : usr=3.58%, sys=15.87%, ctx=65283, majf=0, minf=13 00:15:01.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:01.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:15:01.179 issued rwts: total=961288,960061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:01.179 00:15:01.179 Run status group 0 (all jobs): 00:15:01.179 READ: bw=62.6MiB/s (65.6MB/s), 62.6MiB/s-62.6MiB/s (65.6MB/s-65.6MB/s), io=3755MiB (3937MB), run=60002-60002msec 00:15:01.179 WRITE: bw=62.5MiB/s (65.5MB/s), 62.5MiB/s-62.5MiB/s (65.5MB/s-65.5MB/s), io=3750MiB (3932MB), run=60002-60002msec 00:15:01.179 00:15:01.179 Disk stats (read/write): 00:15:01.179 ublkb1: ios=957622/956455, merge=0/0, ticks=4108564/3375982, in_queue=7484546, util=99.92% 00:15:01.179 03:15:02 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.179 [2024-11-18 03:15:02.176634] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:01.179 [2024-11-18 03:15:02.207428] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:01.179 [2024-11-18 03:15:02.207633] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:01.179 [2024-11-18 03:15:02.214338] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:01.179 [2024-11-18 03:15:02.214413] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:01.179 [2024-11-18 03:15:02.214424] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.179 03:15:02 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.179 [2024-11-18 03:15:02.229395] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:01.179 [2024-11-18 03:15:02.230345] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:01.179 [2024-11-18 03:15:02.230374] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.179 03:15:02 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:01.179 03:15:02 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:01.179 03:15:02 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 83352 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 83352 ']' 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 83352 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83352 00:15:01.179 killing process with pid 83352 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83352' 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@969 -- # kill 83352 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@974 -- # 
wait 83352 00:15:01.179 [2024-11-18 03:15:02.430486] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:01.179 [2024-11-18 03:15:02.430538] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:01.179 ************************************ 00:15:01.179 END TEST ublk_recovery 00:15:01.179 ************************************ 00:15:01.179 00:15:01.179 real 1m2.935s 00:15:01.179 user 1m44.288s 00:15:01.179 sys 0m22.865s 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.179 03:15:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.179 03:15:02 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:01.179 03:15:02 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:01.179 03:15:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:01.179 03:15:02 -- common/autotest_common.sh@10 -- # set +x 00:15:01.179 03:15:02 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:01.179 03:15:02 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:01.179 03:15:02 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:01.179 03:15:02 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:01.179 03:15:02 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:01.179 03:15:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:01.179 03:15:02 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:01.179 03:15:02 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:01.179 03:15:02 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:01.179 03:15:02 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:01.179 03:15:02 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:01.179 03:15:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:01.179 03:15:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.179 03:15:02 -- common/autotest_common.sh@10 -- # set +x 00:15:01.179 ************************************ 00:15:01.179 START TEST ftl 00:15:01.179 ************************************ 00:15:01.179 03:15:02 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:01.179 * Looking for test storage... 
00:15:01.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:01.179 03:15:02 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:01.179 03:15:02 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:01.179 03:15:02 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:01.179 03:15:02 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:01.179 03:15:02 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.179 03:15:02 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.179 03:15:02 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.179 03:15:02 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.179 03:15:02 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.179 03:15:02 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.179 03:15:02 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.179 03:15:02 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.179 03:15:02 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.179 03:15:02 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.179 03:15:02 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.179 03:15:02 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:01.179 03:15:02 ftl -- scripts/common.sh@345 -- # : 1 00:15:01.179 03:15:02 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.179 03:15:02 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:01.179 03:15:02 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:01.179 03:15:02 ftl -- scripts/common.sh@353 -- # local d=1 00:15:01.179 03:15:02 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.179 03:15:02 ftl -- scripts/common.sh@355 -- # echo 1 00:15:01.179 03:15:02 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.179 03:15:02 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:01.179 03:15:02 ftl -- scripts/common.sh@353 -- # local d=2 00:15:01.179 03:15:02 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.179 03:15:02 ftl -- scripts/common.sh@355 -- # echo 2 00:15:01.179 03:15:02 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.179 03:15:02 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.179 03:15:02 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.179 03:15:02 ftl -- scripts/common.sh@368 -- # return 0 00:15:01.179 03:15:02 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.179 03:15:02 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:01.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.179 --rc genhtml_branch_coverage=1 00:15:01.179 --rc genhtml_function_coverage=1 00:15:01.179 --rc genhtml_legend=1 00:15:01.179 --rc geninfo_all_blocks=1 00:15:01.179 --rc geninfo_unexecuted_blocks=1 00:15:01.179 00:15:01.179 ' 00:15:01.179 03:15:02 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:01.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.179 --rc genhtml_branch_coverage=1 00:15:01.179 --rc genhtml_function_coverage=1 00:15:01.179 --rc genhtml_legend=1 00:15:01.179 --rc geninfo_all_blocks=1 00:15:01.179 --rc geninfo_unexecuted_blocks=1 00:15:01.179 00:15:01.179 ' 00:15:01.179 03:15:02 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:01.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.179 --rc genhtml_branch_coverage=1 00:15:01.179 --rc genhtml_function_coverage=1 00:15:01.179 --rc 
genhtml_legend=1 00:15:01.179 --rc geninfo_all_blocks=1 00:15:01.179 --rc geninfo_unexecuted_blocks=1 00:15:01.179 00:15:01.179 ' 00:15:01.179 03:15:02 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:01.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.179 --rc genhtml_branch_coverage=1 00:15:01.179 --rc genhtml_function_coverage=1 00:15:01.179 --rc genhtml_legend=1 00:15:01.179 --rc geninfo_all_blocks=1 00:15:01.179 --rc geninfo_unexecuted_blocks=1 00:15:01.179 00:15:01.179 ' 00:15:01.179 03:15:02 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:01.179 03:15:02 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:01.179 03:15:02 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:01.179 03:15:02 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:01.179 03:15:02 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:01.179 03:15:02 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:01.180 03:15:02 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:01.180 03:15:02 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:01.180 03:15:02 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:01.180 03:15:02 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:01.180 03:15:02 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:01.180 03:15:02 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:01.180 03:15:02 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:01.180 03:15:02 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:01.180 03:15:02 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:01.180 03:15:02 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:01.180 03:15:02 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:01.180 03:15:02 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:01.180 03:15:02 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:01.180 03:15:02 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:01.180 03:15:02 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:01.180 03:15:02 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:01.180 03:15:02 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:01.180 03:15:02 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:01.180 03:15:02 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:01.180 03:15:02 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:01.180 03:15:02 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:01.180 03:15:02 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:01.180 03:15:02 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:01.180 03:15:02 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:01.180 03:15:02 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:01.180 03:15:02 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:01.180 03:15:02 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:01.180 03:15:02 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:01.180 03:15:02 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:01.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:01.180 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:01.180 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:01.180 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:01.180 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:01.180 03:15:03 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=84145 00:15:01.180 03:15:03 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:01.180 03:15:03 ftl -- ftl/ftl.sh@38 -- # waitforlisten 84145 00:15:01.180 03:15:03 ftl -- common/autotest_common.sh@831 -- # '[' -z 84145 ']' 00:15:01.180 03:15:03 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.180 03:15:03 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.180 03:15:03 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.180 03:15:03 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.180 03:15:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:01.180 [2024-11-18 03:15:03.443667] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:01.180 [2024-11-18 03:15:03.443921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84145 ] 00:15:01.180 [2024-11-18 03:15:03.586509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.180 [2024-11-18 03:15:03.618647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.180 03:15:04 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.180 03:15:04 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:01.180 03:15:04 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:01.180 03:15:04 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:01.438 03:15:04 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:01.438 03:15:04 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:01.696 03:15:05 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:01.696 03:15:05 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:01.696 03:15:05 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:01.955 03:15:05 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:01.955 03:15:05 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:01.955 03:15:05 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:01.955 03:15:05 ftl -- ftl/ftl.sh@50 -- # break 00:15:01.955 03:15:05 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:01.955 03:15:05 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:01.955 03:15:05 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:01.955 03:15:05 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:02.213 03:15:05 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:02.213 03:15:05 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:02.213 03:15:05 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:02.213 03:15:05 ftl -- ftl/ftl.sh@63 -- # break 00:15:02.213 03:15:05 ftl -- ftl/ftl.sh@66 -- # killprocess 84145 00:15:02.213 03:15:05 ftl -- common/autotest_common.sh@950 -- # '[' -z 84145 ']' 00:15:02.213 03:15:05 ftl -- common/autotest_common.sh@954 -- # kill -0 84145 00:15:02.213 03:15:05 ftl -- common/autotest_common.sh@955 -- # uname 00:15:02.213 03:15:05 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.213 03:15:05 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84145 00:15:02.213 killing process with pid 84145 00:15:02.213 03:15:05 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:02.213 03:15:05 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:02.213 03:15:05 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84145' 00:15:02.213 03:15:05 ftl -- common/autotest_common.sh@969 -- # kill 84145 00:15:02.213 03:15:05 ftl -- common/autotest_common.sh@974 -- # wait 84145 00:15:02.472 03:15:05 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:02.472 03:15:05 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:02.472 03:15:05 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:02.472 03:15:05 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.472 03:15:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:02.472 ************************************ 00:15:02.472 START TEST ftl_fio_basic 00:15:02.472 ************************************ 00:15:02.472 03:15:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:02.472 * Looking for test storage... 
00:15:02.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:02.472 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:02.472 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:02.472 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:02.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.731 --rc genhtml_branch_coverage=1 00:15:02.731 --rc genhtml_function_coverage=1 00:15:02.731 --rc genhtml_legend=1 00:15:02.731 --rc geninfo_all_blocks=1 00:15:02.731 --rc geninfo_unexecuted_blocks=1 00:15:02.731 00:15:02.731 ' 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:02.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.731 --rc 
genhtml_branch_coverage=1 00:15:02.731 --rc genhtml_function_coverage=1 00:15:02.731 --rc genhtml_legend=1 00:15:02.731 --rc geninfo_all_blocks=1 00:15:02.731 --rc geninfo_unexecuted_blocks=1 00:15:02.731 00:15:02.731 ' 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:02.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.731 --rc genhtml_branch_coverage=1 00:15:02.731 --rc genhtml_function_coverage=1 00:15:02.731 --rc genhtml_legend=1 00:15:02.731 --rc geninfo_all_blocks=1 00:15:02.731 --rc geninfo_unexecuted_blocks=1 00:15:02.731 00:15:02.731 ' 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:02.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.731 --rc genhtml_branch_coverage=1 00:15:02.731 --rc genhtml_function_coverage=1 00:15:02.731 --rc genhtml_legend=1 00:15:02.731 --rc geninfo_all_blocks=1 00:15:02.731 --rc geninfo_unexecuted_blocks=1 00:15:02.731 00:15:02.731 ' 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:02.731 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:02.731 
03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=84261 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 84261 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 84261 ']' 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:02.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
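For reference, the bdev stack that the ftl_fio_basic trace below assembles, condensed from the rpc.py calls it logs — a sketch only (the $RPC shorthand and the captured-UUID variables are mine; the UUIDs noted in the comments are the ones this run happened to generate):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                      # shorthand for this sketch, not in the trace
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base NVMe: 1310720 x 4096 B blocks
  lvs=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)                     # -> 2b484585-d472-4012-982c-7d90c844c2ae in this run
  base=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")          # thin 103424 MiB lvol -> fdf76f0c-a537-4b15-9747-e1bacf86a92f
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV-cache NVMe
  $RPC bdev_split_create nvc0n1 -s 5171 1                              # one 5171 MiB slice -> nvc0n1p0
  $RPC -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 --l2p_dram_limit 60

With FTL_BDEV_NAME=ftl0 exported above, the basic fio jobs then target ftl0 once the create (and the NV-cache scrub it triggers at the end of this section) completes.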
00:15:02.732 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.732 03:15:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:02.732 [2024-11-18 03:15:06.170541] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:02.732 [2024-11-18 03:15:06.170792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84261 ] 00:15:02.990 [2024-11-18 03:15:06.312119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:02.990 [2024-11-18 03:15:06.345482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.990 [2024-11-18 03:15:06.345691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.990 [2024-11-18 03:15:06.345790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.557 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.557 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:03.557 03:15:07 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:03.557 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:03.557 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:03.557 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:03.557 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:03.557 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:03.817 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:03.817 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:03.817 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:03.817 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:03.817 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:03.817 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:03.817 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:03.817 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:04.075 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:04.075 { 00:15:04.075 "name": "nvme0n1", 00:15:04.075 "aliases": [ 00:15:04.075 "e8d97c98-7d7a-4f35-a570-c770174294c0" 00:15:04.075 ], 00:15:04.075 "product_name": "NVMe disk", 00:15:04.075 "block_size": 4096, 00:15:04.075 "num_blocks": 1310720, 00:15:04.075 "uuid": "e8d97c98-7d7a-4f35-a570-c770174294c0", 00:15:04.075 "numa_id": -1, 00:15:04.075 "assigned_rate_limits": { 00:15:04.076 "rw_ios_per_sec": 0, 00:15:04.076 "rw_mbytes_per_sec": 0, 00:15:04.076 "r_mbytes_per_sec": 0, 00:15:04.076 "w_mbytes_per_sec": 0 00:15:04.076 }, 00:15:04.076 "claimed": false, 00:15:04.076 "zoned": false, 00:15:04.076 "supported_io_types": { 00:15:04.076 "read": true, 00:15:04.076 "write": true, 00:15:04.076 "unmap": true, 00:15:04.076 "flush": true, 00:15:04.076 "reset": true, 00:15:04.076 "nvme_admin": true, 00:15:04.076 "nvme_io": true, 00:15:04.076 "nvme_io_md": 
false, 00:15:04.076 "write_zeroes": true, 00:15:04.076 "zcopy": false, 00:15:04.076 "get_zone_info": false, 00:15:04.076 "zone_management": false, 00:15:04.076 "zone_append": false, 00:15:04.076 "compare": true, 00:15:04.076 "compare_and_write": false, 00:15:04.076 "abort": true, 00:15:04.076 "seek_hole": false, 00:15:04.076 "seek_data": false, 00:15:04.076 "copy": true, 00:15:04.076 "nvme_iov_md": false 00:15:04.076 }, 00:15:04.076 "driver_specific": { 00:15:04.076 "nvme": [ 00:15:04.076 { 00:15:04.076 "pci_address": "0000:00:11.0", 00:15:04.076 "trid": { 00:15:04.076 "trtype": "PCIe", 00:15:04.076 "traddr": "0000:00:11.0" 00:15:04.076 }, 00:15:04.076 "ctrlr_data": { 00:15:04.076 "cntlid": 0, 00:15:04.076 "vendor_id": "0x1b36", 00:15:04.076 "model_number": "QEMU NVMe Ctrl", 00:15:04.076 "serial_number": "12341", 00:15:04.076 "firmware_revision": "8.0.0", 00:15:04.076 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:04.076 "oacs": { 00:15:04.076 "security": 0, 00:15:04.076 "format": 1, 00:15:04.076 "firmware": 0, 00:15:04.076 "ns_manage": 1 00:15:04.076 }, 00:15:04.076 "multi_ctrlr": false, 00:15:04.076 "ana_reporting": false 00:15:04.076 }, 00:15:04.076 "vs": { 00:15:04.076 "nvme_version": "1.4" 00:15:04.076 }, 00:15:04.076 "ns_data": { 00:15:04.076 "id": 1, 00:15:04.076 "can_share": false 00:15:04.076 } 00:15:04.076 } 00:15:04.076 ], 00:15:04.076 "mp_policy": "active_passive" 00:15:04.076 } 00:15:04.076 } 00:15:04.076 ]' 00:15:04.076 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:04.076 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:04.076 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:04.076 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:04.076 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:04.076 03:15:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:04.076 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:04.076 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:04.076 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:04.076 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:04.076 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:04.335 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:04.335 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:04.595 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=2b484585-d472-4012-982c-7d90c844c2ae 00:15:04.595 03:15:07 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2b484585-d472-4012-982c-7d90c844c2ae 00:15:04.595 03:15:08 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=fdf76f0c-a537-4b15-9747-e1bacf86a92f 00:15:04.595 03:15:08 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fdf76f0c-a537-4b15-9747-e1bacf86a92f 00:15:04.595 03:15:08 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:04.595 03:15:08 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:04.595 03:15:08 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=fdf76f0c-a537-4b15-9747-e1bacf86a92f 00:15:04.595 03:15:08 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:04.595 03:15:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size fdf76f0c-a537-4b15-9747-e1bacf86a92f 00:15:04.595 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=fdf76f0c-a537-4b15-9747-e1bacf86a92f 00:15:04.595 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:04.595 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:04.595 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:04.595 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdf76f0c-a537-4b15-9747-e1bacf86a92f 00:15:04.853 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:04.853 { 00:15:04.853 "name": "fdf76f0c-a537-4b15-9747-e1bacf86a92f", 00:15:04.853 "aliases": [ 00:15:04.853 "lvs/nvme0n1p0" 00:15:04.853 ], 00:15:04.853 "product_name": "Logical Volume", 00:15:04.853 "block_size": 4096, 00:15:04.853 "num_blocks": 26476544, 00:15:04.853 "uuid": "fdf76f0c-a537-4b15-9747-e1bacf86a92f", 00:15:04.853 "assigned_rate_limits": { 00:15:04.853 "rw_ios_per_sec": 0, 00:15:04.853 "rw_mbytes_per_sec": 0, 00:15:04.853 "r_mbytes_per_sec": 0, 00:15:04.853 "w_mbytes_per_sec": 0 00:15:04.853 }, 00:15:04.853 "claimed": false, 00:15:04.853 "zoned": false, 00:15:04.853 "supported_io_types": { 00:15:04.853 "read": true, 00:15:04.853 "write": true, 00:15:04.853 "unmap": true, 00:15:04.853 "flush": false, 00:15:04.853 "reset": true, 00:15:04.853 "nvme_admin": false, 00:15:04.853 "nvme_io": false, 00:15:04.853 "nvme_io_md": false, 00:15:04.853 "write_zeroes": true, 00:15:04.853 "zcopy": false, 00:15:04.853 "get_zone_info": false, 00:15:04.853 "zone_management": false, 00:15:04.853 "zone_append": false, 00:15:04.853 "compare": false, 00:15:04.853 "compare_and_write": false, 00:15:04.853 "abort": false, 00:15:04.853 "seek_hole": true, 00:15:04.853 "seek_data": true, 00:15:04.853 "copy": false, 00:15:04.853 "nvme_iov_md": false 00:15:04.853 }, 00:15:04.853 "driver_specific": { 00:15:04.853 "lvol": { 00:15:04.853 "lvol_store_uuid": "2b484585-d472-4012-982c-7d90c844c2ae", 00:15:04.853 "base_bdev": "nvme0n1", 00:15:04.853 "thin_provision": true, 00:15:04.853 "num_allocated_clusters": 0, 00:15:04.853 "snapshot": false, 00:15:04.853 "clone": false, 00:15:04.853 "esnap_clone": false 00:15:04.853 } 00:15:04.853 } 00:15:04.853 } 00:15:04.853 ]' 00:15:04.853 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:04.853 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:04.853 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:04.853 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:04.853 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:04.853 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:04.853 03:15:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:04.853 03:15:08 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:04.853 03:15:08 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:05.111 03:15:08 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:05.111 03:15:08 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:15:05.111 03:15:08 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size fdf76f0c-a537-4b15-9747-e1bacf86a92f 00:15:05.111 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=fdf76f0c-a537-4b15-9747-e1bacf86a92f 00:15:05.111 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:05.111 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:05.111 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:05.111 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdf76f0c-a537-4b15-9747-e1bacf86a92f 00:15:05.370 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:05.370 { 00:15:05.370 "name": "fdf76f0c-a537-4b15-9747-e1bacf86a92f", 00:15:05.370 "aliases": [ 00:15:05.370 "lvs/nvme0n1p0" 00:15:05.370 ], 00:15:05.370 "product_name": "Logical Volume", 00:15:05.370 "block_size": 4096, 00:15:05.370 "num_blocks": 26476544, 00:15:05.370 "uuid": "fdf76f0c-a537-4b15-9747-e1bacf86a92f", 00:15:05.370 "assigned_rate_limits": { 00:15:05.370 "rw_ios_per_sec": 0, 00:15:05.370 "rw_mbytes_per_sec": 0, 00:15:05.370 "r_mbytes_per_sec": 0, 00:15:05.370 "w_mbytes_per_sec": 0 00:15:05.370 }, 00:15:05.370 "claimed": false, 00:15:05.370 "zoned": false, 00:15:05.370 "supported_io_types": { 00:15:05.370 "read": true, 00:15:05.370 "write": true, 00:15:05.370 "unmap": true, 00:15:05.370 "flush": false, 00:15:05.370 "reset": true, 00:15:05.370 "nvme_admin": false, 00:15:05.370 "nvme_io": false, 00:15:05.370 "nvme_io_md": false, 00:15:05.370 "write_zeroes": true, 00:15:05.370 "zcopy": false, 00:15:05.370 "get_zone_info": false, 00:15:05.370 "zone_management": false, 00:15:05.370 "zone_append": false, 00:15:05.370 "compare": false, 00:15:05.370 "compare_and_write": false, 00:15:05.370 "abort": false, 00:15:05.370 "seek_hole": true, 00:15:05.370 "seek_data": true, 00:15:05.370 "copy": false, 00:15:05.370 "nvme_iov_md": false 00:15:05.370 }, 00:15:05.370 "driver_specific": { 00:15:05.370 "lvol": { 00:15:05.370 "lvol_store_uuid": "2b484585-d472-4012-982c-7d90c844c2ae", 00:15:05.370 "base_bdev": "nvme0n1", 00:15:05.370 "thin_provision": true, 00:15:05.370 "num_allocated_clusters": 0, 00:15:05.370 "snapshot": false, 00:15:05.370 "clone": false, 00:15:05.370 "esnap_clone": false 00:15:05.370 } 00:15:05.370 } 00:15:05.370 } 00:15:05.370 ]' 00:15:05.370 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:05.370 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:05.370 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:05.370 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:05.370 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:05.370 03:15:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:05.370 03:15:08 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:05.370 03:15:08 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:05.628 03:15:09 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:05.628 03:15:09 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:05.628 03:15:09 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:05.628 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:05.629 03:15:09 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size fdf76f0c-a537-4b15-9747-e1bacf86a92f 00:15:05.629 03:15:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=fdf76f0c-a537-4b15-9747-e1bacf86a92f 00:15:05.629 03:15:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:05.629 03:15:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:05.629 03:15:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:05.629 03:15:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdf76f0c-a537-4b15-9747-e1bacf86a92f 00:15:05.887 03:15:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:05.887 { 00:15:05.887 "name": "fdf76f0c-a537-4b15-9747-e1bacf86a92f", 00:15:05.887 "aliases": [ 00:15:05.887 "lvs/nvme0n1p0" 00:15:05.887 ], 00:15:05.887 "product_name": "Logical Volume", 00:15:05.887 "block_size": 4096, 00:15:05.887 "num_blocks": 26476544, 00:15:05.887 "uuid": "fdf76f0c-a537-4b15-9747-e1bacf86a92f", 00:15:05.887 "assigned_rate_limits": { 00:15:05.887 "rw_ios_per_sec": 0, 00:15:05.887 "rw_mbytes_per_sec": 0, 00:15:05.887 "r_mbytes_per_sec": 0, 00:15:05.887 "w_mbytes_per_sec": 0 00:15:05.887 }, 00:15:05.887 "claimed": false, 00:15:05.887 "zoned": false, 00:15:05.887 "supported_io_types": { 00:15:05.887 "read": true, 00:15:05.887 "write": true, 00:15:05.887 "unmap": true, 00:15:05.887 "flush": false, 00:15:05.887 "reset": true, 00:15:05.887 "nvme_admin": false, 00:15:05.887 "nvme_io": false, 00:15:05.887 "nvme_io_md": false, 00:15:05.887 "write_zeroes": true, 00:15:05.887 "zcopy": false, 00:15:05.887 "get_zone_info": false, 00:15:05.887 "zone_management": false, 00:15:05.887 "zone_append": false, 00:15:05.887 "compare": false, 00:15:05.887 "compare_and_write": false, 00:15:05.887 "abort": false, 00:15:05.887 "seek_hole": true, 00:15:05.887 "seek_data": true, 00:15:05.887 "copy": false, 00:15:05.887 "nvme_iov_md": false 00:15:05.887 }, 00:15:05.887 "driver_specific": { 00:15:05.887 "lvol": { 00:15:05.887 "lvol_store_uuid": "2b484585-d472-4012-982c-7d90c844c2ae", 00:15:05.887 "base_bdev": "nvme0n1", 00:15:05.887 "thin_provision": true, 00:15:05.887 "num_allocated_clusters": 0, 00:15:05.887 "snapshot": false, 00:15:05.887 "clone": false, 00:15:05.887 "esnap_clone": false 00:15:05.887 } 00:15:05.887 } 00:15:05.887 } 00:15:05.887 ]' 00:15:05.887 03:15:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:05.887 03:15:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:05.887 03:15:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:05.887 03:15:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:05.887 03:15:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:05.887 03:15:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:05.887 03:15:09 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:05.887 03:15:09 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:05.887 03:15:09 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fdf76f0c-a537-4b15-9747-e1bacf86a92f -c nvc0n1p0 --l2p_dram_limit 60 00:15:06.147 [2024-11-18 03:15:09.560931] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.147 [2024-11-18 03:15:09.560980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:06.147 [2024-11-18 03:15:09.560994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:06.147 [2024-11-18 03:15:09.561013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.147 [2024-11-18 03:15:09.561088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.147 [2024-11-18 03:15:09.561101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:06.147 [2024-11-18 03:15:09.561112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:06.147 [2024-11-18 03:15:09.561123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.147 [2024-11-18 03:15:09.561169] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:06.147 [2024-11-18 03:15:09.561447] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:06.147 [2024-11-18 03:15:09.561473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.147 [2024-11-18 03:15:09.561490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:06.147 [2024-11-18 03:15:09.561507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:15:06.147 [2024-11-18 03:15:09.561531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.147 [2024-11-18 03:15:09.561666] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cadee8c3-51f6-4cf5-918a-01d55b5dcf8b 00:15:06.147 [2024-11-18 03:15:09.562704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.147 [2024-11-18 03:15:09.562834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:06.147 [2024-11-18 03:15:09.562867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:06.147 [2024-11-18 03:15:09.562875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.147 [2024-11-18 03:15:09.567770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.147 [2024-11-18 03:15:09.567799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:06.147 [2024-11-18 03:15:09.567810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.830 ms 00:15:06.147 [2024-11-18 03:15:09.567817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.147 [2024-11-18 03:15:09.567922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.147 [2024-11-18 03:15:09.567935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:06.147 [2024-11-18 03:15:09.567944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:15:06.147 [2024-11-18 03:15:09.567952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.147 [2024-11-18 03:15:09.568001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.147 [2024-11-18 03:15:09.568027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:06.147 [2024-11-18 03:15:09.568037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:06.147 [2024-11-18 03:15:09.568044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:15:06.147 [2024-11-18 03:15:09.568071] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:06.147 [2024-11-18 03:15:09.569459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.147 [2024-11-18 03:15:09.569568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:06.147 [2024-11-18 03:15:09.569582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:15:06.147 [2024-11-18 03:15:09.569590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.147 [2024-11-18 03:15:09.569635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.147 [2024-11-18 03:15:09.569645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:06.147 [2024-11-18 03:15:09.569653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:06.147 [2024-11-18 03:15:09.569663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.147 [2024-11-18 03:15:09.569684] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:06.147 [2024-11-18 03:15:09.569835] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:06.147 [2024-11-18 03:15:09.569847] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:06.147 [2024-11-18 03:15:09.569859] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:06.147 [2024-11-18 03:15:09.569868] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:06.147 [2024-11-18 03:15:09.569879] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:06.147 [2024-11-18 03:15:09.569887] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:06.147 [2024-11-18 03:15:09.569899] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:06.147 [2024-11-18 03:15:09.569906] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:06.147 [2024-11-18 03:15:09.569915] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:06.147 [2024-11-18 03:15:09.569923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.147 [2024-11-18 03:15:09.569932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:06.147 [2024-11-18 03:15:09.569939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:15:06.147 [2024-11-18 03:15:09.569947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.147 [2024-11-18 03:15:09.570036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.148 [2024-11-18 03:15:09.570047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:06.148 [2024-11-18 03:15:09.570054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:15:06.148 [2024-11-18 03:15:09.570062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.148 [2024-11-18 03:15:09.570203] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:06.148 [2024-11-18 03:15:09.570216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:06.148 
[2024-11-18 03:15:09.570225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:06.148 [2024-11-18 03:15:09.570235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:06.148 [2024-11-18 03:15:09.570243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:06.148 [2024-11-18 03:15:09.570253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:06.148 [2024-11-18 03:15:09.570260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:06.148 [2024-11-18 03:15:09.570270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:06.148 [2024-11-18 03:15:09.570277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:06.148 [2024-11-18 03:15:09.570287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:06.148 [2024-11-18 03:15:09.570294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:06.148 [2024-11-18 03:15:09.570305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:06.148 [2024-11-18 03:15:09.570324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:06.148 [2024-11-18 03:15:09.570336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:06.148 [2024-11-18 03:15:09.570343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:06.148 [2024-11-18 03:15:09.570353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:06.148 [2024-11-18 03:15:09.570360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:06.148 [2024-11-18 03:15:09.570370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:06.148 [2024-11-18 03:15:09.570378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:06.148 [2024-11-18 03:15:09.570387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:06.148 [2024-11-18 03:15:09.570405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:06.148 [2024-11-18 03:15:09.570415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:06.148 [2024-11-18 03:15:09.570422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:06.148 [2024-11-18 03:15:09.570431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:06.148 [2024-11-18 03:15:09.570438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:06.148 [2024-11-18 03:15:09.570447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:06.148 [2024-11-18 03:15:09.570459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:06.148 [2024-11-18 03:15:09.570468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:06.148 [2024-11-18 03:15:09.570475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:06.148 [2024-11-18 03:15:09.570486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:06.148 [2024-11-18 03:15:09.570493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:06.148 [2024-11-18 03:15:09.570503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:06.148 [2024-11-18 03:15:09.570510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:06.148 [2024-11-18 03:15:09.570519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:15:06.148 [2024-11-18 03:15:09.570527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:06.148 [2024-11-18 03:15:09.570537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:06.148 [2024-11-18 03:15:09.570544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:06.148 [2024-11-18 03:15:09.570553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:06.148 [2024-11-18 03:15:09.570561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:06.148 [2024-11-18 03:15:09.570569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:06.148 [2024-11-18 03:15:09.570577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:06.148 [2024-11-18 03:15:09.570585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:06.148 [2024-11-18 03:15:09.570591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:06.148 [2024-11-18 03:15:09.570599] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:06.148 [2024-11-18 03:15:09.570606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:06.148 [2024-11-18 03:15:09.570616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:06.148 [2024-11-18 03:15:09.570623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:06.148 [2024-11-18 03:15:09.570641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:06.148 [2024-11-18 03:15:09.570647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:06.148 [2024-11-18 03:15:09.570655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:06.148 [2024-11-18 03:15:09.570662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:06.148 [2024-11-18 03:15:09.570669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:06.148 [2024-11-18 03:15:09.570676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:06.148 [2024-11-18 03:15:09.570687] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:06.148 [2024-11-18 03:15:09.570697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:06.148 [2024-11-18 03:15:09.570707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:06.148 [2024-11-18 03:15:09.570714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:06.148 [2024-11-18 03:15:09.570724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:06.148 [2024-11-18 03:15:09.570732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:06.148 [2024-11-18 03:15:09.570741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:06.148 [2024-11-18 03:15:09.570749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:06.148 [2024-11-18 
03:15:09.570758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:06.148 [2024-11-18 03:15:09.570765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:06.148 [2024-11-18 03:15:09.570775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:06.148 [2024-11-18 03:15:09.570781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:06.148 [2024-11-18 03:15:09.570790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:06.148 [2024-11-18 03:15:09.570797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:06.148 [2024-11-18 03:15:09.570805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:06.148 [2024-11-18 03:15:09.570812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:06.148 [2024-11-18 03:15:09.570820] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:06.148 [2024-11-18 03:15:09.570828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:06.148 [2024-11-18 03:15:09.570837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:06.148 [2024-11-18 03:15:09.570844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:06.148 [2024-11-18 03:15:09.570853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:06.148 [2024-11-18 03:15:09.570860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:06.148 [2024-11-18 03:15:09.570868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.148 [2024-11-18 03:15:09.570875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:06.148 [2024-11-18 03:15:09.570886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:15:06.148 [2024-11-18 03:15:09.570892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.148 [2024-11-18 03:15:09.570962] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:15:06.148 [2024-11-18 03:15:09.570977] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:08.681 [2024-11-18 03:15:11.700636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.700804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:08.681 [2024-11-18 03:15:11.700846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2129.658 ms 00:15:08.681 [2024-11-18 03:15:11.700861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.717766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.717839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:08.681 [2024-11-18 03:15:11.717868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.811 ms 00:15:08.681 [2024-11-18 03:15:11.717884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.718090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.718120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:08.681 [2024-11-18 03:15:11.718149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:15:08.681 [2024-11-18 03:15:11.718200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.730993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.731030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:08.681 [2024-11-18 03:15:11.731043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.609 ms 00:15:08.681 [2024-11-18 03:15:11.731051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.731087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.731095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:08.681 [2024-11-18 03:15:11.731105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:08.681 [2024-11-18 03:15:11.731112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.731499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.731519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:08.681 [2024-11-18 03:15:11.731535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:15:08.681 [2024-11-18 03:15:11.731546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.731714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.731742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:08.681 [2024-11-18 03:15:11.731757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:15:08.681 [2024-11-18 03:15:11.731769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.737088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.737120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:08.681 [2024-11-18 
03:15:11.737132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.282 ms 00:15:08.681 [2024-11-18 03:15:11.737140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.745474] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:08.681 [2024-11-18 03:15:11.759285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.759338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:08.681 [2024-11-18 03:15:11.759349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.078 ms 00:15:08.681 [2024-11-18 03:15:11.759358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.794059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.794107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:08.681 [2024-11-18 03:15:11.794122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.649 ms 00:15:08.681 [2024-11-18 03:15:11.794133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.794335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.794349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:08.681 [2024-11-18 03:15:11.794376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:15:08.681 [2024-11-18 03:15:11.794385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.797099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.797138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:08.681 [2024-11-18 03:15:11.797158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.683 ms 00:15:08.681 [2024-11-18 03:15:11.797171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.799489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.799525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:08.681 [2024-11-18 03:15:11.799536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.277 ms 00:15:08.681 [2024-11-18 03:15:11.799546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.799839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.799855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:08.681 [2024-11-18 03:15:11.799863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:15:08.681 [2024-11-18 03:15:11.799873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.819147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.819183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:08.681 [2024-11-18 03:15:11.819196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.240 ms 00:15:08.681 [2024-11-18 03:15:11.819205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.822663] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.822698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:08.681 [2024-11-18 03:15:11.822709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.395 ms 00:15:08.681 [2024-11-18 03:15:11.822719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.825400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.825433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:08.681 [2024-11-18 03:15:11.825442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:15:08.681 [2024-11-18 03:15:11.825451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.828205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.828242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:08.681 [2024-11-18 03:15:11.828250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.721 ms 00:15:08.681 [2024-11-18 03:15:11.828261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.828300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.828327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:08.681 [2024-11-18 03:15:11.828336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:08.681 [2024-11-18 03:15:11.828345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.828422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:08.681 [2024-11-18 03:15:11.828443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:08.681 [2024-11-18 03:15:11.828451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:08.681 [2024-11-18 03:15:11.828462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:08.681 [2024-11-18 03:15:11.829458] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2268.097 ms, result 0 00:15:08.681 { 00:15:08.681 "name": "ftl0", 00:15:08.681 "uuid": "cadee8c3-51f6-4cf5-918a-01d55b5dcf8b" 00:15:08.681 } 00:15:08.681 03:15:11 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:08.681 03:15:11 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:08.681 03:15:11 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:08.681 03:15:11 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:08.681 03:15:11 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:08.681 03:15:11 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:08.681 03:15:11 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:08.681 03:15:12 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:08.681 [ 00:15:08.681 { 00:15:08.681 "name": "ftl0", 00:15:08.681 "aliases": [ 00:15:08.681 "cadee8c3-51f6-4cf5-918a-01d55b5dcf8b" 00:15:08.681 ], 00:15:08.681 "product_name": "FTL disk", 00:15:08.681 
"block_size": 4096, 00:15:08.681 "num_blocks": 20971520, 00:15:08.681 "uuid": "cadee8c3-51f6-4cf5-918a-01d55b5dcf8b", 00:15:08.681 "assigned_rate_limits": { 00:15:08.681 "rw_ios_per_sec": 0, 00:15:08.681 "rw_mbytes_per_sec": 0, 00:15:08.681 "r_mbytes_per_sec": 0, 00:15:08.681 "w_mbytes_per_sec": 0 00:15:08.681 }, 00:15:08.682 "claimed": false, 00:15:08.682 "zoned": false, 00:15:08.682 "supported_io_types": { 00:15:08.682 "read": true, 00:15:08.682 "write": true, 00:15:08.682 "unmap": true, 00:15:08.682 "flush": true, 00:15:08.682 "reset": false, 00:15:08.682 "nvme_admin": false, 00:15:08.682 "nvme_io": false, 00:15:08.682 "nvme_io_md": false, 00:15:08.682 "write_zeroes": true, 00:15:08.682 "zcopy": false, 00:15:08.682 "get_zone_info": false, 00:15:08.682 "zone_management": false, 00:15:08.682 "zone_append": false, 00:15:08.682 "compare": false, 00:15:08.682 "compare_and_write": false, 00:15:08.682 "abort": false, 00:15:08.682 "seek_hole": false, 00:15:08.682 "seek_data": false, 00:15:08.682 "copy": false, 00:15:08.682 "nvme_iov_md": false 00:15:08.682 }, 00:15:08.682 "driver_specific": { 00:15:08.682 "ftl": { 00:15:08.682 "base_bdev": "fdf76f0c-a537-4b15-9747-e1bacf86a92f", 00:15:08.682 "cache": "nvc0n1p0" 00:15:08.682 } 00:15:08.682 } 00:15:08.682 } 00:15:08.682 ] 00:15:08.682 03:15:12 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:08.682 03:15:12 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:08.682 03:15:12 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:08.941 03:15:12 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:08.941 03:15:12 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:09.199 [2024-11-18 03:15:12.626030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.199 [2024-11-18 03:15:12.626072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:09.199 [2024-11-18 03:15:12.626087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:09.199 [2024-11-18 03:15:12.626095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.199 [2024-11-18 03:15:12.626124] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:09.199 [2024-11-18 03:15:12.626617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.199 [2024-11-18 03:15:12.626658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:09.199 [2024-11-18 03:15:12.626687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:15:09.199 [2024-11-18 03:15:12.626700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.199 [2024-11-18 03:15:12.627161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.199 [2024-11-18 03:15:12.627192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:09.199 [2024-11-18 03:15:12.627202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:15:09.199 [2024-11-18 03:15:12.627211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.199 [2024-11-18 03:15:12.630506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.199 [2024-11-18 03:15:12.630545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:09.199 [2024-11-18 
03:15:12.630554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.277 ms 00:15:09.199 [2024-11-18 03:15:12.630563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.200 [2024-11-18 03:15:12.636750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.200 [2024-11-18 03:15:12.636879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:09.200 [2024-11-18 03:15:12.636899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.164 ms 00:15:09.200 [2024-11-18 03:15:12.636913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.200 [2024-11-18 03:15:12.638594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.200 [2024-11-18 03:15:12.638633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:09.200 [2024-11-18 03:15:12.638643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:15:09.200 [2024-11-18 03:15:12.638652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.200 [2024-11-18 03:15:12.641711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.200 [2024-11-18 03:15:12.641750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:09.200 [2024-11-18 03:15:12.641761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.019 ms 00:15:09.200 [2024-11-18 03:15:12.641772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.200 [2024-11-18 03:15:12.641915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.200 [2024-11-18 03:15:12.641940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:09.200 [2024-11-18 03:15:12.641948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:15:09.200 [2024-11-18 03:15:12.641957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.200 [2024-11-18 03:15:12.643146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.200 [2024-11-18 03:15:12.643185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:09.200 [2024-11-18 03:15:12.643194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:15:09.200 [2024-11-18 03:15:12.643203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.200 [2024-11-18 03:15:12.644146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.200 [2024-11-18 03:15:12.644277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:09.200 [2024-11-18 03:15:12.644297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:15:09.200 [2024-11-18 03:15:12.644321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.200 [2024-11-18 03:15:12.645234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.200 [2024-11-18 03:15:12.645272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:09.200 [2024-11-18 03:15:12.645286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:15:09.200 [2024-11-18 03:15:12.645299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.200 [2024-11-18 03:15:12.646133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.200 [2024-11-18 03:15:12.646171] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:09.200 [2024-11-18 03:15:12.646194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:15:09.200 [2024-11-18 03:15:12.646204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.200 [2024-11-18 03:15:12.646237] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:09.200 [2024-11-18 03:15:12.646254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 
03:15:12.646462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:09.200 [2024-11-18 03:15:12.646742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:09.200 [2024-11-18 03:15:12.646880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.646891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.646900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.646908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.646916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.646926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.646938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.646950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.646964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.646976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.646991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:09.201 [2024-11-18 03:15:12.647403] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:09.201 [2024-11-18 03:15:12.647417] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cadee8c3-51f6-4cf5-918a-01d55b5dcf8b 00:15:09.201 [2024-11-18 03:15:12.647432] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:09.201 [2024-11-18 03:15:12.647439] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:09.201 [2024-11-18 03:15:12.647453] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:09.201 [2024-11-18 03:15:12.647478] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:09.201 [2024-11-18 03:15:12.647491] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:09.201 [2024-11-18 03:15:12.647502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:09.201 [2024-11-18 03:15:12.647516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:09.201 [2024-11-18 03:15:12.647526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:09.201 [2024-11-18 03:15:12.647539] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:09.201 [2024-11-18 03:15:12.647551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.201 [2024-11-18 03:15:12.647565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:09.201 [2024-11-18 03:15:12.647573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:15:09.201 [2024-11-18 03:15:12.647582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.201 [2024-11-18 03:15:12.649097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.201 [2024-11-18 03:15:12.649123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:09.201 [2024-11-18 03:15:12.649140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:15:09.201 [2024-11-18 03:15:12.649150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.201 [2024-11-18 03:15:12.649244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.201 [2024-11-18 03:15:12.649255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:09.201 [2024-11-18 03:15:12.649263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:15:09.201 [2024-11-18 03:15:12.649272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.201 [2024-11-18 03:15:12.654768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.201 [2024-11-18 03:15:12.654900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:09.201 [2024-11-18 03:15:12.654921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.201 [2024-11-18 03:15:12.654948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.201 
[2024-11-18 03:15:12.655013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.201 [2024-11-18 03:15:12.655025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:09.201 [2024-11-18 03:15:12.655035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.201 [2024-11-18 03:15:12.655043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.201 [2024-11-18 03:15:12.655109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.201 [2024-11-18 03:15:12.655126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:09.201 [2024-11-18 03:15:12.655143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.201 [2024-11-18 03:15:12.655151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.201 [2024-11-18 03:15:12.655172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.201 [2024-11-18 03:15:12.655181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:09.201 [2024-11-18 03:15:12.655188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.201 [2024-11-18 03:15:12.655197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.201 [2024-11-18 03:15:12.664249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.201 [2024-11-18 03:15:12.664291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:09.201 [2024-11-18 03:15:12.664300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.201 [2024-11-18 03:15:12.664328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.201 [2024-11-18 03:15:12.671732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.201 [2024-11-18 03:15:12.671772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:09.201 [2024-11-18 03:15:12.671782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.201 [2024-11-18 03:15:12.671791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.201 [2024-11-18 03:15:12.671877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.201 [2024-11-18 03:15:12.671890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:09.201 [2024-11-18 03:15:12.671900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.201 [2024-11-18 03:15:12.671910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.201 [2024-11-18 03:15:12.671960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.201 [2024-11-18 03:15:12.671971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:09.201 [2024-11-18 03:15:12.671979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.201 [2024-11-18 03:15:12.671988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.201 [2024-11-18 03:15:12.672066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.201 [2024-11-18 03:15:12.672077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:09.201 [2024-11-18 03:15:12.672085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.201 [2024-11-18 03:15:12.672095] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.202 [2024-11-18 03:15:12.672136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.202 [2024-11-18 03:15:12.672147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:09.202 [2024-11-18 03:15:12.672155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.202 [2024-11-18 03:15:12.672164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.202 [2024-11-18 03:15:12.672201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.202 [2024-11-18 03:15:12.672212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:09.202 [2024-11-18 03:15:12.672220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.202 [2024-11-18 03:15:12.672232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.202 [2024-11-18 03:15:12.672283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.202 [2024-11-18 03:15:12.672294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:09.202 [2024-11-18 03:15:12.672302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.202 [2024-11-18 03:15:12.672334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.202 [2024-11-18 03:15:12.672494] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.433 ms, result 0 00:15:09.202 true 00:15:09.202 03:15:12 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 84261 00:15:09.202 03:15:12 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 84261 ']' 00:15:09.202 03:15:12 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 84261 00:15:09.202 03:15:12 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:09.202 03:15:12 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.202 03:15:12 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84261 00:15:09.202 killing process with pid 84261 00:15:09.202 03:15:12 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:09.202 03:15:12 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:09.202 03:15:12 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84261' 00:15:09.202 03:15:12 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 84261 00:15:09.202 03:15:12 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 84261 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:15.861 03:15:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:15.861 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:15.861 fio-3.35 00:15:15.861 Starting 1 thread 00:15:19.155 00:15:19.155 test: (groupid=0, jobs=1): err= 0: pid=84423: Mon Nov 18 03:15:22 2024 00:15:19.155 read: IOPS=1399, BW=93.0MiB/s (97.5MB/s)(255MiB/2738msec) 00:15:19.155 slat (nsec): min=2885, max=30381, avg=4242.69, stdev=2060.02 00:15:19.155 clat (usec): min=237, max=811, avg=323.29, stdev=43.68 00:15:19.155 lat (usec): min=241, max=822, avg=327.54, stdev=44.58 00:15:19.155 clat percentiles (usec): 00:15:19.155 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 297], 00:15:19.155 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:15:19.155 | 70.00th=[ 322], 80.00th=[ 326], 90.00th=[ 343], 95.00th=[ 416], 00:15:19.155 | 99.00th=[ 529], 99.50th=[ 586], 99.90th=[ 742], 99.95th=[ 758], 00:15:19.155 | 99.99th=[ 816] 00:15:19.155 write: IOPS=1409, BW=93.6MiB/s (98.2MB/s)(256MiB/2735msec); 0 zone resets 00:15:19.155 slat (usec): min=13, max=106, avg=18.62, stdev= 4.54 00:15:19.155 clat (usec): min=262, max=981, avg=352.15, stdev=60.09 00:15:19.155 lat (usec): min=278, max=1010, avg=370.77, stdev=61.25 00:15:19.155 clat percentiles (usec): 00:15:19.155 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 310], 20.00th=[ 318], 00:15:19.155 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 347], 00:15:19.155 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 400], 95.00th=[ 420], 00:15:19.155 | 99.00th=[ 676], 99.50th=[ 725], 99.90th=[ 922], 99.95th=[ 955], 00:15:19.155 | 99.99th=[ 979] 00:15:19.155 bw ( KiB/s): min=92072, max=101456, per=99.01%, avg=94918.00, stdev=4084.88, samples=5 00:15:19.155 iops : min= 1354, max= 1492, avg=1395.80, stdev=60.11, samples=5 00:15:19.155 lat (usec) : 250=0.08%, 500=97.98%, 750=1.72%, 1000=0.22% 00:15:19.155 
cpu : usr=99.20%, sys=0.15%, ctx=5, majf=0, minf=1181 00:15:19.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:19.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.155 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:19.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:19.155 00:15:19.155 Run status group 0 (all jobs): 00:15:19.155 READ: bw=93.0MiB/s (97.5MB/s), 93.0MiB/s-93.0MiB/s (97.5MB/s-97.5MB/s), io=255MiB (267MB), run=2738-2738msec 00:15:19.155 WRITE: bw=93.6MiB/s (98.2MB/s), 93.6MiB/s-93.6MiB/s (98.2MB/s-98.2MB/s), io=256MiB (269MB), run=2735-2735msec 00:15:19.414 ----------------------------------------------------- 00:15:19.414 Suppressions used: 00:15:19.414 count bytes template 00:15:19.414 1 5 /usr/src/fio/parse.c 00:15:19.414 1 8 libtcmalloc_minimal.so 00:15:19.414 1 904 libcrypto.so 00:15:19.414 ----------------------------------------------------- 00:15:19.414 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:19.414 03:15:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:19.671 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:19.671 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:19.671 fio-3.35 00:15:19.671 Starting 2 threads 00:15:41.591 00:15:41.591 first_half: (groupid=0, jobs=1): err= 0: pid=84498: Mon Nov 18 03:15:44 2024 00:15:41.591 read: IOPS=3179, BW=12.4MiB/s (13.0MB/s)(256MiB/20593msec) 00:15:41.591 slat (nsec): min=2972, max=54376, avg=3635.54, stdev=681.35 00:15:41.591 clat (msec): min=7, max=229, avg=35.06, stdev=18.40 00:15:41.591 lat (msec): min=7, max=229, avg=35.06, stdev=18.40 00:15:41.591 clat percentiles (msec): 00:15:41.591 | 1.00th=[ 26], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:15:41.591 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:15:41.591 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 61], 00:15:41.591 | 99.00th=[ 136], 99.50th=[ 148], 99.90th=[ 161], 99.95th=[ 165], 00:15:41.591 | 99.99th=[ 222] 00:15:41.591 write: IOPS=3200, BW=12.5MiB/s (13.1MB/s)(256MiB/20480msec); 0 zone resets 00:15:41.591 slat (usec): min=3, max=265, avg= 4.96, stdev= 2.39 00:15:41.591 clat (usec): min=361, max=29046, avg=5170.61, stdev=3257.92 00:15:41.591 lat (usec): min=371, max=29050, avg=5175.57, stdev=3258.02 00:15:41.591 clat percentiles (usec): 00:15:41.591 | 1.00th=[ 791], 5.00th=[ 1778], 10.00th=[ 2212], 20.00th=[ 2769], 00:15:41.591 | 30.00th=[ 3392], 40.00th=[ 4146], 50.00th=[ 4752], 60.00th=[ 5211], 00:15:41.591 | 70.00th=[ 5407], 80.00th=[ 6063], 90.00th=[ 9634], 95.00th=[10552], 00:15:41.591 | 99.00th=[18482], 99.50th=[23200], 99.90th=[28181], 99.95th=[28443], 00:15:41.591 | 99.99th=[28705] 00:15:41.591 bw ( KiB/s): min= 664, max=43280, per=100.00%, avg=29127.11, stdev=14943.61, samples=18 00:15:41.591 iops : min= 166, max=10820, avg=7281.78, stdev=3735.90, samples=18 00:15:41.591 lat (usec) : 500=0.04%, 750=0.28%, 1000=0.76% 00:15:41.591 lat (msec) : 2=2.46%, 4=15.37%, 10=27.33%, 20=3.58%, 50=47.14% 00:15:41.591 lat (msec) : 100=1.63%, 250=1.41% 00:15:41.591 cpu : usr=99.45%, sys=0.12%, ctx=50, majf=0, minf=5609 00:15:41.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:41.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.591 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:41.591 issued rwts: total=65481,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:41.591 second_half: (groupid=0, jobs=1): err= 0: pid=84499: Mon Nov 18 03:15:44 2024 00:15:41.591 read: IOPS=3158, BW=12.3MiB/s (12.9MB/s)(256MiB/20733msec) 00:15:41.591 slat (nsec): min=3008, max=56056, avg=3706.19, stdev=658.31 00:15:41.591 clat (msec): min=6, max=251, avg=34.67, stdev=20.76 00:15:41.591 lat (msec): min=6, max=251, avg=34.67, stdev=20.76 00:15:41.591 clat percentiles (msec): 00:15:41.591 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 29], 00:15:41.591 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 30], 00:15:41.591 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 67], 00:15:41.591 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 174], 99.95th=[ 218], 00:15:41.591 | 99.99th=[ 251] 
00:15:41.591 write: IOPS=3305, BW=12.9MiB/s (13.5MB/s)(256MiB/19827msec); 0 zone resets 00:15:41.591 slat (usec): min=3, max=1087, avg= 5.14, stdev= 4.80 00:15:41.591 clat (usec): min=361, max=50016, avg=5837.44, stdev=6202.17 00:15:41.591 lat (usec): min=368, max=50021, avg=5842.58, stdev=6202.30 00:15:41.591 clat percentiles (usec): 00:15:41.591 | 1.00th=[ 685], 5.00th=[ 848], 10.00th=[ 1074], 20.00th=[ 2311], 00:15:41.591 | 30.00th=[ 3032], 40.00th=[ 3687], 50.00th=[ 4424], 60.00th=[ 5014], 00:15:41.591 | 70.00th=[ 5473], 80.00th=[ 6587], 90.00th=[10552], 95.00th=[22152], 00:15:41.591 | 99.00th=[31327], 99.50th=[32375], 99.90th=[45876], 99.95th=[48497], 00:15:41.591 | 99.99th=[49546] 00:15:41.591 bw ( KiB/s): min= 2056, max=49384, per=97.52%, avg=24966.10, stdev=15013.17, samples=21 00:15:41.591 iops : min= 514, max=12346, avg=6241.52, stdev=3753.29, samples=21 00:15:41.591 lat (usec) : 500=0.05%, 750=1.13%, 1000=3.03% 00:15:41.591 lat (msec) : 2=4.51%, 4=13.26%, 10=23.43%, 20=3.76%, 50=47.53% 00:15:41.591 lat (msec) : 100=1.74%, 250=1.55%, 500=0.01% 00:15:41.591 cpu : usr=99.28%, sys=0.12%, ctx=39, majf=0, minf=5529 00:15:41.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:41.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:41.591 issued rwts: total=65482,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:41.591 00:15:41.591 Run status group 0 (all jobs): 00:15:41.591 READ: bw=24.7MiB/s (25.9MB/s), 12.3MiB/s-12.4MiB/s (12.9MB/s-13.0MB/s), io=512MiB (536MB), run=20593-20733msec 00:15:41.591 WRITE: bw=25.0MiB/s (26.2MB/s), 12.5MiB/s-12.9MiB/s (13.1MB/s-13.5MB/s), io=512MiB (537MB), run=19827-20480msec 00:15:41.849 ----------------------------------------------------- 00:15:41.849 Suppressions used: 00:15:41.849 count bytes template 00:15:41.849 2 10 /usr/src/fio/parse.c 00:15:41.849 4 384 /usr/src/fio/iolog.c 00:15:41.849 1 8 libtcmalloc_minimal.so 00:15:41.849 1 904 libcrypto.so 00:15:41.849 ----------------------------------------------------- 00:15:41.849 00:15:41.849 03:15:45 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:41.849 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:41.849 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:42.108 03:15:45 
ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:42.108 03:15:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:42.108 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:42.108 fio-3.35 00:15:42.108 Starting 1 thread 00:15:57.037 00:15:57.037 test: (groupid=0, jobs=1): err= 0: pid=84774: Mon Nov 18 03:15:57 2024 00:15:57.037 read: IOPS=8286, BW=32.4MiB/s (33.9MB/s)(255MiB/7868msec) 00:15:57.037 slat (nsec): min=3006, max=29475, avg=3473.12, stdev=716.51 00:15:57.037 clat (usec): min=929, max=30484, avg=15438.13, stdev=1478.53 00:15:57.037 lat (usec): min=932, max=30487, avg=15441.61, stdev=1478.52 00:15:57.037 clat percentiles (usec): 00:15:57.037 | 1.00th=[14222], 5.00th=[14615], 10.00th=[14746], 20.00th=[14877], 00:15:57.037 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:15:57.037 | 70.00th=[15401], 80.00th=[15533], 90.00th=[15795], 95.00th=[17433], 00:15:57.037 | 99.00th=[23200], 99.50th=[23987], 99.90th=[27395], 99.95th=[28967], 00:15:57.037 | 99.99th=[29754] 00:15:57.037 write: IOPS=17.2k, BW=67.0MiB/s (70.3MB/s)(256MiB/3821msec); 0 zone resets 00:15:57.037 slat (usec): min=4, max=156, avg= 5.78, stdev= 2.04 00:15:57.037 clat (usec): min=449, max=46357, avg=7419.06, stdev=9160.79 00:15:57.037 lat (usec): min=454, max=46363, avg=7424.84, stdev=9160.74 00:15:57.037 clat percentiles (usec): 00:15:57.037 | 1.00th=[ 594], 5.00th=[ 660], 10.00th=[ 709], 20.00th=[ 824], 00:15:57.037 | 30.00th=[ 988], 40.00th=[ 1303], 50.00th=[ 5145], 60.00th=[ 5866], 00:15:57.037 | 70.00th=[ 6849], 80.00th=[ 8029], 90.00th=[26608], 95.00th=[28181], 00:15:57.037 | 99.00th=[30540], 99.50th=[33162], 99.90th=[39060], 99.95th=[39584], 00:15:57.037 | 99.99th=[45351] 00:15:57.037 bw ( KiB/s): min=36040, max=86696, per=95.53%, avg=65536.00, stdev=15367.33, samples=8 00:15:57.037 iops : min= 9010, max=21674, avg=16384.00, stdev=3841.83, samples=8 00:15:57.037 lat (usec) : 500=0.01%, 750=7.05%, 1000=8.43% 00:15:57.037 lat (msec) : 2=5.13%, 4=0.56%, 10=20.79%, 20=48.68%, 50=9.33% 00:15:57.037 cpu : usr=99.18%, sys=0.17%, ctx=23, majf=0, minf=5577 00:15:57.037 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:57.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.037 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:57.037 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.037 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:57.037 00:15:57.037 Run status group 0 (all jobs): 00:15:57.037 READ: bw=32.4MiB/s (33.9MB/s), 32.4MiB/s-32.4MiB/s (33.9MB/s-33.9MB/s), io=255MiB (267MB), run=7868-7868msec 00:15:57.037 WRITE: bw=67.0MiB/s (70.3MB/s), 67.0MiB/s-67.0MiB/s (70.3MB/s-70.3MB/s), io=256MiB (268MB), run=3821-3821msec 00:15:57.037 ----------------------------------------------------- 00:15:57.037 Suppressions used: 00:15:57.037 count bytes template 00:15:57.037 1 5 /usr/src/fio/parse.c 00:15:57.037 2 192 /usr/src/fio/iolog.c 00:15:57.037 1 8 libtcmalloc_minimal.so 00:15:57.037 1 904 libcrypto.so 00:15:57.037 ----------------------------------------------------- 00:15:57.037 00:15:57.037 03:15:58 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:57.037 03:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:57.037 03:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:57.037 03:15:58 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:57.037 Remove shared memory files 00:15:57.037 03:15:58 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:15:57.037 03:15:58 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:57.037 03:15:58 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:15:57.037 03:15:58 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:15:57.037 03:15:58 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69750 /dev/shm/spdk_tgt_trace.pid83208 00:15:57.037 03:15:58 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:57.037 03:15:58 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:15:57.037 ************************************ 00:15:57.037 END TEST ftl_fio_basic 00:15:57.037 ************************************ 00:15:57.037 00:15:57.037 real 0m52.875s 00:15:57.037 user 2m0.834s 00:15:57.037 sys 0m2.324s 00:15:57.037 03:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.037 03:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:57.037 03:15:58 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:57.037 03:15:58 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:57.037 03:15:58 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.037 03:15:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:57.037 ************************************ 00:15:57.038 START TEST ftl_bdevperf 00:15:57.038 ************************************ 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:57.038 * Looking for test storage... 
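One detail of the fio_bdev invocations traced in the runs above is worth unpacking: before each run, the wrapper inspects the SPDK fio plugin with ldd and, if the plugin links against an ASan runtime, prepends that runtime to LD_PRELOAD. ASan requires its runtime to be the first DSO loaded, so preloading the instrumented plugin alone would make fio abort at startup. A simplified sketch of the traced logic (the loop over both sanitizer runtimes, 'libasan' and 'libclang_rt.asan', is collapsed to the gcc case):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
job_file=${1:?path to fio job file}
# Same pipeline as in the trace: take the resolved library path ($3) from ldd.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# Prepend the ASan runtime (if any) so it is loaded before the plugin.
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$job_file"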
00:15:57.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:57.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.038 --rc genhtml_branch_coverage=1 00:15:57.038 --rc genhtml_function_coverage=1 00:15:57.038 --rc genhtml_legend=1 00:15:57.038 --rc geninfo_all_blocks=1 00:15:57.038 --rc geninfo_unexecuted_blocks=1 00:15:57.038 00:15:57.038 ' 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:57.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.038 --rc genhtml_branch_coverage=1 00:15:57.038 
--rc genhtml_function_coverage=1 00:15:57.038 --rc genhtml_legend=1 00:15:57.038 --rc geninfo_all_blocks=1 00:15:57.038 --rc geninfo_unexecuted_blocks=1 00:15:57.038 00:15:57.038 ' 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:57.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.038 --rc genhtml_branch_coverage=1 00:15:57.038 --rc genhtml_function_coverage=1 00:15:57.038 --rc genhtml_legend=1 00:15:57.038 --rc geninfo_all_blocks=1 00:15:57.038 --rc geninfo_unexecuted_blocks=1 00:15:57.038 00:15:57.038 ' 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:57.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.038 --rc genhtml_branch_coverage=1 00:15:57.038 --rc genhtml_function_coverage=1 00:15:57.038 --rc genhtml_legend=1 00:15:57.038 --rc geninfo_all_blocks=1 00:15:57.038 --rc geninfo_unexecuted_blocks=1 00:15:57.038 00:15:57.038 ' 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.038 03:15:58 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84984 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84984 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 84984 ']' 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:57.038 [2024-11-18 03:15:59.070104] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
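bdevperf was started above with -z -T ftl0 and the script then waits on its RPC socket, so everything that follows in the trace is RPC-driven construction of that ftl0 target (after a clear_lvols pass that deletes the lvstore left over from the previous test). Collapsed into one sketch, the sequence is as follows; each command appears verbatim in the trace below, with the two run-time-generated UUIDs elided here:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base device
$rpc bdev_lvol_create_lvstore nvme0n1 lvs                          # lvstore on it
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>            # 103424 MiB thin lvol
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache device
$rpc bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB NV-cache slice
$rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20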
00:15:57.038 [2024-11-18 03:15:59.070469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84984 ] 00:15:57.038 [2024-11-18 03:15:59.216155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.038 [2024-11-18 03:15:59.248015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:15:57.038 03:15:59 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:57.038 03:16:00 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:57.038 03:16:00 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:15:57.038 03:16:00 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:57.038 03:16:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:57.038 03:16:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:57.038 03:16:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:57.038 03:16:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:57.038 03:16:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:57.039 03:16:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:57.039 { 00:15:57.039 "name": "nvme0n1", 00:15:57.039 "aliases": [ 00:15:57.039 "c8c39ef3-2bf7-4473-92f6-e1c9061732ae" 00:15:57.039 ], 00:15:57.039 "product_name": "NVMe disk", 00:15:57.039 "block_size": 4096, 00:15:57.039 "num_blocks": 1310720, 00:15:57.039 "uuid": "c8c39ef3-2bf7-4473-92f6-e1c9061732ae", 00:15:57.039 "numa_id": -1, 00:15:57.039 "assigned_rate_limits": { 00:15:57.039 "rw_ios_per_sec": 0, 00:15:57.039 "rw_mbytes_per_sec": 0, 00:15:57.039 "r_mbytes_per_sec": 0, 00:15:57.039 "w_mbytes_per_sec": 0 00:15:57.039 }, 00:15:57.039 "claimed": true, 00:15:57.039 "claim_type": "read_many_write_one", 00:15:57.039 "zoned": false, 00:15:57.039 "supported_io_types": { 00:15:57.039 "read": true, 00:15:57.039 "write": true, 00:15:57.039 "unmap": true, 00:15:57.039 "flush": true, 00:15:57.039 "reset": true, 00:15:57.039 "nvme_admin": true, 00:15:57.039 "nvme_io": true, 00:15:57.039 "nvme_io_md": false, 00:15:57.039 "write_zeroes": true, 00:15:57.039 "zcopy": false, 00:15:57.039 "get_zone_info": false, 00:15:57.039 "zone_management": false, 00:15:57.039 "zone_append": false, 00:15:57.039 "compare": true, 00:15:57.039 "compare_and_write": false, 00:15:57.039 "abort": true, 00:15:57.039 "seek_hole": false, 00:15:57.039 "seek_data": false, 00:15:57.039 "copy": true, 00:15:57.039 "nvme_iov_md": false 00:15:57.039 }, 00:15:57.039 "driver_specific": { 00:15:57.039 
"nvme": [ 00:15:57.039 { 00:15:57.039 "pci_address": "0000:00:11.0", 00:15:57.039 "trid": { 00:15:57.039 "trtype": "PCIe", 00:15:57.039 "traddr": "0000:00:11.0" 00:15:57.039 }, 00:15:57.039 "ctrlr_data": { 00:15:57.039 "cntlid": 0, 00:15:57.039 "vendor_id": "0x1b36", 00:15:57.039 "model_number": "QEMU NVMe Ctrl", 00:15:57.039 "serial_number": "12341", 00:15:57.039 "firmware_revision": "8.0.0", 00:15:57.039 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:57.039 "oacs": { 00:15:57.039 "security": 0, 00:15:57.039 "format": 1, 00:15:57.039 "firmware": 0, 00:15:57.039 "ns_manage": 1 00:15:57.039 }, 00:15:57.039 "multi_ctrlr": false, 00:15:57.039 "ana_reporting": false 00:15:57.039 }, 00:15:57.039 "vs": { 00:15:57.039 "nvme_version": "1.4" 00:15:57.039 }, 00:15:57.039 "ns_data": { 00:15:57.039 "id": 1, 00:15:57.039 "can_share": false 00:15:57.039 } 00:15:57.039 } 00:15:57.039 ], 00:15:57.039 "mp_policy": "active_passive" 00:15:57.039 } 00:15:57.039 } 00:15:57.039 ]' 00:15:57.039 03:16:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:57.039 03:16:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:57.039 03:16:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:57.039 03:16:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:57.039 03:16:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:57.039 03:16:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:15:57.039 03:16:00 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:15:57.039 03:16:00 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:57.039 03:16:00 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:15:57.039 03:16:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:57.039 03:16:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:57.298 03:16:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=2b484585-d472-4012-982c-7d90c844c2ae 00:15:57.298 03:16:00 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:15:57.298 03:16:00 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2b484585-d472-4012-982c-7d90c844c2ae 00:15:57.556 03:16:00 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:57.556 03:16:01 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=198bab9b-d36b-41aa-827e-47fbea5d2356 00:15:57.556 03:16:01 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 198bab9b-d36b-41aa-827e-47fbea5d2356 00:15:57.815 03:16:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=08f3758b-e04d-4169-82f7-0d82d754c8ea 00:15:57.815 03:16:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 08f3758b-e04d-4169-82f7-0d82d754c8ea 00:15:57.815 03:16:01 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:15:57.815 03:16:01 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:57.815 03:16:01 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=08f3758b-e04d-4169-82f7-0d82d754c8ea 00:15:57.815 03:16:01 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:15:57.815 03:16:01 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 08f3758b-e04d-4169-82f7-0d82d754c8ea 00:15:57.815 03:16:01 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=08f3758b-e04d-4169-82f7-0d82d754c8ea 00:15:57.815 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:57.815 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:57.815 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:57.815 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08f3758b-e04d-4169-82f7-0d82d754c8ea 00:15:58.074 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:58.074 { 00:15:58.074 "name": "08f3758b-e04d-4169-82f7-0d82d754c8ea", 00:15:58.074 "aliases": [ 00:15:58.074 "lvs/nvme0n1p0" 00:15:58.074 ], 00:15:58.074 "product_name": "Logical Volume", 00:15:58.074 "block_size": 4096, 00:15:58.074 "num_blocks": 26476544, 00:15:58.074 "uuid": "08f3758b-e04d-4169-82f7-0d82d754c8ea", 00:15:58.074 "assigned_rate_limits": { 00:15:58.074 "rw_ios_per_sec": 0, 00:15:58.074 "rw_mbytes_per_sec": 0, 00:15:58.074 "r_mbytes_per_sec": 0, 00:15:58.074 "w_mbytes_per_sec": 0 00:15:58.074 }, 00:15:58.074 "claimed": false, 00:15:58.074 "zoned": false, 00:15:58.074 "supported_io_types": { 00:15:58.074 "read": true, 00:15:58.074 "write": true, 00:15:58.074 "unmap": true, 00:15:58.074 "flush": false, 00:15:58.074 "reset": true, 00:15:58.074 "nvme_admin": false, 00:15:58.074 "nvme_io": false, 00:15:58.074 "nvme_io_md": false, 00:15:58.074 "write_zeroes": true, 00:15:58.074 "zcopy": false, 00:15:58.074 "get_zone_info": false, 00:15:58.074 "zone_management": false, 00:15:58.074 "zone_append": false, 00:15:58.074 "compare": false, 00:15:58.074 "compare_and_write": false, 00:15:58.074 "abort": false, 00:15:58.074 "seek_hole": true, 00:15:58.074 "seek_data": true, 00:15:58.074 "copy": false, 00:15:58.074 "nvme_iov_md": false 00:15:58.074 }, 00:15:58.074 "driver_specific": { 00:15:58.074 "lvol": { 00:15:58.074 "lvol_store_uuid": "198bab9b-d36b-41aa-827e-47fbea5d2356", 00:15:58.074 "base_bdev": "nvme0n1", 00:15:58.074 "thin_provision": true, 00:15:58.074 "num_allocated_clusters": 0, 00:15:58.074 "snapshot": false, 00:15:58.074 "clone": false, 00:15:58.074 "esnap_clone": false 00:15:58.074 } 00:15:58.074 } 00:15:58.074 } 00:15:58.074 ]' 00:15:58.074 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:58.074 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:58.074 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:58.074 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:58.074 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:58.074 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:58.074 03:16:01 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:15:58.074 03:16:01 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:15:58.074 03:16:01 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:58.332 03:16:01 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:58.332 03:16:01 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:58.332 03:16:01 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 08f3758b-e04d-4169-82f7-0d82d754c8ea 00:15:58.332 03:16:01 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=08f3758b-e04d-4169-82f7-0d82d754c8ea 00:15:58.332 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:58.332 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:58.332 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:58.332 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08f3758b-e04d-4169-82f7-0d82d754c8ea 00:15:58.591 03:16:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:58.591 { 00:15:58.591 "name": "08f3758b-e04d-4169-82f7-0d82d754c8ea", 00:15:58.591 "aliases": [ 00:15:58.591 "lvs/nvme0n1p0" 00:15:58.591 ], 00:15:58.591 "product_name": "Logical Volume", 00:15:58.591 "block_size": 4096, 00:15:58.591 "num_blocks": 26476544, 00:15:58.591 "uuid": "08f3758b-e04d-4169-82f7-0d82d754c8ea", 00:15:58.591 "assigned_rate_limits": { 00:15:58.591 "rw_ios_per_sec": 0, 00:15:58.591 "rw_mbytes_per_sec": 0, 00:15:58.591 "r_mbytes_per_sec": 0, 00:15:58.591 "w_mbytes_per_sec": 0 00:15:58.591 }, 00:15:58.591 "claimed": false, 00:15:58.591 "zoned": false, 00:15:58.591 "supported_io_types": { 00:15:58.591 "read": true, 00:15:58.591 "write": true, 00:15:58.591 "unmap": true, 00:15:58.591 "flush": false, 00:15:58.591 "reset": true, 00:15:58.591 "nvme_admin": false, 00:15:58.591 "nvme_io": false, 00:15:58.591 "nvme_io_md": false, 00:15:58.591 "write_zeroes": true, 00:15:58.591 "zcopy": false, 00:15:58.591 "get_zone_info": false, 00:15:58.591 "zone_management": false, 00:15:58.591 "zone_append": false, 00:15:58.591 "compare": false, 00:15:58.591 "compare_and_write": false, 00:15:58.591 "abort": false, 00:15:58.591 "seek_hole": true, 00:15:58.591 "seek_data": true, 00:15:58.591 "copy": false, 00:15:58.591 "nvme_iov_md": false 00:15:58.591 }, 00:15:58.591 "driver_specific": { 00:15:58.591 "lvol": { 00:15:58.591 "lvol_store_uuid": "198bab9b-d36b-41aa-827e-47fbea5d2356", 00:15:58.591 "base_bdev": "nvme0n1", 00:15:58.591 "thin_provision": true, 00:15:58.591 "num_allocated_clusters": 0, 00:15:58.591 "snapshot": false, 00:15:58.591 "clone": false, 00:15:58.591 "esnap_clone": false 00:15:58.591 } 00:15:58.591 } 00:15:58.591 } 00:15:58.591 ]' 00:15:58.591 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:58.591 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:58.591 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:58.591 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:58.591 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:58.591 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:58.591 03:16:02 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:15:58.591 03:16:02 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:58.849 03:16:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:15:58.849 03:16:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 08f3758b-e04d-4169-82f7-0d82d754c8ea 00:15:58.849 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=08f3758b-e04d-4169-82f7-0d82d754c8ea 00:15:58.849 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:58.849 03:16:02 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:15:58.849 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:58.849 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08f3758b-e04d-4169-82f7-0d82d754c8ea 00:15:59.108 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:59.108 { 00:15:59.108 "name": "08f3758b-e04d-4169-82f7-0d82d754c8ea", 00:15:59.108 "aliases": [ 00:15:59.108 "lvs/nvme0n1p0" 00:15:59.108 ], 00:15:59.108 "product_name": "Logical Volume", 00:15:59.108 "block_size": 4096, 00:15:59.108 "num_blocks": 26476544, 00:15:59.108 "uuid": "08f3758b-e04d-4169-82f7-0d82d754c8ea", 00:15:59.108 "assigned_rate_limits": { 00:15:59.108 "rw_ios_per_sec": 0, 00:15:59.108 "rw_mbytes_per_sec": 0, 00:15:59.108 "r_mbytes_per_sec": 0, 00:15:59.108 "w_mbytes_per_sec": 0 00:15:59.108 }, 00:15:59.108 "claimed": false, 00:15:59.108 "zoned": false, 00:15:59.108 "supported_io_types": { 00:15:59.108 "read": true, 00:15:59.108 "write": true, 00:15:59.108 "unmap": true, 00:15:59.108 "flush": false, 00:15:59.108 "reset": true, 00:15:59.108 "nvme_admin": false, 00:15:59.108 "nvme_io": false, 00:15:59.108 "nvme_io_md": false, 00:15:59.108 "write_zeroes": true, 00:15:59.108 "zcopy": false, 00:15:59.108 "get_zone_info": false, 00:15:59.108 "zone_management": false, 00:15:59.108 "zone_append": false, 00:15:59.108 "compare": false, 00:15:59.108 "compare_and_write": false, 00:15:59.108 "abort": false, 00:15:59.108 "seek_hole": true, 00:15:59.108 "seek_data": true, 00:15:59.108 "copy": false, 00:15:59.108 "nvme_iov_md": false 00:15:59.108 }, 00:15:59.108 "driver_specific": { 00:15:59.108 "lvol": { 00:15:59.108 "lvol_store_uuid": "198bab9b-d36b-41aa-827e-47fbea5d2356", 00:15:59.108 "base_bdev": "nvme0n1", 00:15:59.108 "thin_provision": true, 00:15:59.108 "num_allocated_clusters": 0, 00:15:59.108 "snapshot": false, 00:15:59.108 "clone": false, 00:15:59.108 "esnap_clone": false 00:15:59.108 } 00:15:59.108 } 00:15:59.108 } 00:15:59.108 ]' 00:15:59.108 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:59.108 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:59.108 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:59.108 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:59.108 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:59.108 03:16:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:59.108 03:16:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:15:59.108 03:16:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 08f3758b-e04d-4169-82f7-0d82d754c8ea -c nvc0n1p0 --l2p_dram_limit 20 00:15:59.368 [2024-11-18 03:16:02.708425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.368 [2024-11-18 03:16:02.708466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:59.368 [2024-11-18 03:16:02.708478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:59.368 [2024-11-18 03:16:02.708489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.368 [2024-11-18 03:16:02.708534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.368 [2024-11-18 03:16:02.708542] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:59.368 [2024-11-18 03:16:02.708552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:59.368 [2024-11-18 03:16:02.708560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.368 [2024-11-18 03:16:02.708574] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:59.368 [2024-11-18 03:16:02.708783] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:59.368 [2024-11-18 03:16:02.708796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.368 [2024-11-18 03:16:02.708802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:59.368 [2024-11-18 03:16:02.708811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:15:59.368 [2024-11-18 03:16:02.708817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.368 [2024-11-18 03:16:02.708842] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 950ef73d-6f06-4339-bc52-4d64d5442866 00:15:59.368 [2024-11-18 03:16:02.709908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.368 [2024-11-18 03:16:02.709932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:59.368 [2024-11-18 03:16:02.709940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:15:59.368 [2024-11-18 03:16:02.709951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.368 [2024-11-18 03:16:02.714698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.368 [2024-11-18 03:16:02.714728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:59.368 [2024-11-18 03:16:02.714736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.692 ms 00:15:59.368 [2024-11-18 03:16:02.714745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.368 [2024-11-18 03:16:02.714801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.368 [2024-11-18 03:16:02.714809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:59.368 [2024-11-18 03:16:02.714816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:15:59.368 [2024-11-18 03:16:02.714822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.368 [2024-11-18 03:16:02.714856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.368 [2024-11-18 03:16:02.714869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:59.368 [2024-11-18 03:16:02.714876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:59.368 [2024-11-18 03:16:02.714883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.368 [2024-11-18 03:16:02.714897] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:59.368 [2024-11-18 03:16:02.716142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.368 [2024-11-18 03:16:02.716162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:59.368 [2024-11-18 03:16:02.716174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:15:59.368 [2024-11-18 03:16:02.716180] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.368 [2024-11-18 03:16:02.716207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.368 [2024-11-18 03:16:02.716213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:59.368 [2024-11-18 03:16:02.716222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:59.368 [2024-11-18 03:16:02.716231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.368 [2024-11-18 03:16:02.716243] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:59.368 [2024-11-18 03:16:02.716359] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:59.369 [2024-11-18 03:16:02.716370] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:59.369 [2024-11-18 03:16:02.716380] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:59.369 [2024-11-18 03:16:02.716391] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:59.369 [2024-11-18 03:16:02.716398] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:59.369 [2024-11-18 03:16:02.716405] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:59.369 [2024-11-18 03:16:02.716411] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:59.369 [2024-11-18 03:16:02.716417] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:59.369 [2024-11-18 03:16:02.716423] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:59.369 [2024-11-18 03:16:02.716430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.369 [2024-11-18 03:16:02.716436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:59.369 [2024-11-18 03:16:02.716445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:15:59.369 [2024-11-18 03:16:02.716450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.369 [2024-11-18 03:16:02.716515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.369 [2024-11-18 03:16:02.716520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:59.369 [2024-11-18 03:16:02.716530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:59.369 [2024-11-18 03:16:02.716536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.369 [2024-11-18 03:16:02.716606] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:59.369 [2024-11-18 03:16:02.716751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:59.369 [2024-11-18 03:16:02.716764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:59.369 [2024-11-18 03:16:02.716772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:59.369 [2024-11-18 03:16:02.716781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:59.369 [2024-11-18 03:16:02.716786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:59.369 [2024-11-18 03:16:02.716793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:59.369 
[2024-11-18 03:16:02.716798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:59.369 [2024-11-18 03:16:02.716805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:59.369 [2024-11-18 03:16:02.716812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:59.369 [2024-11-18 03:16:02.716819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:59.369 [2024-11-18 03:16:02.716824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:59.369 [2024-11-18 03:16:02.716832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:59.369 [2024-11-18 03:16:02.716837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:59.369 [2024-11-18 03:16:02.716844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:59.369 [2024-11-18 03:16:02.716849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:59.369 [2024-11-18 03:16:02.716855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:59.369 [2024-11-18 03:16:02.716860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:59.369 [2024-11-18 03:16:02.716866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:59.369 [2024-11-18 03:16:02.716872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:59.369 [2024-11-18 03:16:02.716879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:59.369 [2024-11-18 03:16:02.716884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:59.369 [2024-11-18 03:16:02.716890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:59.369 [2024-11-18 03:16:02.716895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:59.369 [2024-11-18 03:16:02.716901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:59.369 [2024-11-18 03:16:02.716906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:59.369 [2024-11-18 03:16:02.716913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:59.369 [2024-11-18 03:16:02.716917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:59.369 [2024-11-18 03:16:02.716925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:59.369 [2024-11-18 03:16:02.716930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:59.369 [2024-11-18 03:16:02.716938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:59.369 [2024-11-18 03:16:02.716943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:59.369 [2024-11-18 03:16:02.716949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:59.369 [2024-11-18 03:16:02.716954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:59.369 [2024-11-18 03:16:02.716960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:59.369 [2024-11-18 03:16:02.716965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:59.369 [2024-11-18 03:16:02.716971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:59.369 [2024-11-18 03:16:02.716976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:59.369 [2024-11-18 03:16:02.716983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:15:59.369 [2024-11-18 03:16:02.716988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:59.369 [2024-11-18 03:16:02.716994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:59.369 [2024-11-18 03:16:02.717000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:59.369 [2024-11-18 03:16:02.717007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:59.369 [2024-11-18 03:16:02.717012] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:59.369 [2024-11-18 03:16:02.717021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:59.369 [2024-11-18 03:16:02.717026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:59.369 [2024-11-18 03:16:02.717033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:59.369 [2024-11-18 03:16:02.717039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:59.369 [2024-11-18 03:16:02.717045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:59.369 [2024-11-18 03:16:02.717050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:59.369 [2024-11-18 03:16:02.717056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:59.369 [2024-11-18 03:16:02.717061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:59.369 [2024-11-18 03:16:02.717068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:59.369 [2024-11-18 03:16:02.717076] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:59.369 [2024-11-18 03:16:02.717084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:59.369 [2024-11-18 03:16:02.717091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:59.369 [2024-11-18 03:16:02.717099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:59.369 [2024-11-18 03:16:02.717105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:59.369 [2024-11-18 03:16:02.717112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:59.369 [2024-11-18 03:16:02.717117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:59.369 [2024-11-18 03:16:02.717125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:59.369 [2024-11-18 03:16:02.717130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:59.369 [2024-11-18 03:16:02.717137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:59.369 [2024-11-18 03:16:02.717142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:59.369 [2024-11-18 03:16:02.717149] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:59.369 [2024-11-18 03:16:02.717154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:59.370 [2024-11-18 03:16:02.717162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:59.370 [2024-11-18 03:16:02.717167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:59.370 [2024-11-18 03:16:02.717174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:59.370 [2024-11-18 03:16:02.717180] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:59.370 [2024-11-18 03:16:02.717187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:59.370 [2024-11-18 03:16:02.717196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:59.370 [2024-11-18 03:16:02.717202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:59.370 [2024-11-18 03:16:02.717208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:59.370 [2024-11-18 03:16:02.717215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:59.370 [2024-11-18 03:16:02.717222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.370 [2024-11-18 03:16:02.717236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:59.370 [2024-11-18 03:16:02.717242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:15:59.370 [2024-11-18 03:16:02.717249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.370 [2024-11-18 03:16:02.717275] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
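Two numbers in the layout dump above are easy to cross-check while the NV cache scrub runs in the trace below. The L2P holds 20971520 entries of 4 bytes each, which is exactly the 80.00 MiB reported for "Region l2p", and the base-device data region of 0x1900000 blocks at 4 KiB each is the 102400.00 MiB reported for "Region data_btm". The --l2p_dram_limit 20 passed to bdev_ftl_create caps how much of that 80 MiB table stays resident, which is why the L2P init further below reports "l2p maximum resident size is: 19 (of 20) MiB". Illustrative arithmetic:

echo $(( 20971520 * 4 / 1024 / 1024 ))      # l2p region: 80 (MiB)
echo $(( 0x1900000 * 4096 / 1024 / 1024 ))  # data_btm region: 102400 (MiB)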
00:15:59.370 [2024-11-18 03:16:02.717284] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:01.900 [2024-11-18 03:16:04.921413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.900 [2024-11-18 03:16:04.921644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:01.901 [2024-11-18 03:16:04.921670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2204.125 ms 00:16:01.901 [2024-11-18 03:16:04.921681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:04.936825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:04.936878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:01.901 [2024-11-18 03:16:04.936893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.066 ms 00:16:01.901 [2024-11-18 03:16:04.936909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:04.937017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:04.937029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:01.901 [2024-11-18 03:16:04.937041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:01.901 [2024-11-18 03:16:04.937050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:04.945888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:04.946086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:01.901 [2024-11-18 03:16:04.946108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.793 ms 00:16:01.901 [2024-11-18 03:16:04.946121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:04.946155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:04.946169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:01.901 [2024-11-18 03:16:04.946180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:01.901 [2024-11-18 03:16:04.946192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:04.946577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:04.946615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:01.901 [2024-11-18 03:16:04.946631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:16:01.901 [2024-11-18 03:16:04.946646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:04.946788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:04.946802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:01.901 [2024-11-18 03:16:04.946817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:16:01.901 [2024-11-18 03:16:04.946830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:04.951680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:04.951718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:01.901 [2024-11-18 
03:16:04.951726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.830 ms 00:16:01.901 [2024-11-18 03:16:04.951736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:04.959807] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:01.901 [2024-11-18 03:16:04.964591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:04.964627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:01.901 [2024-11-18 03:16:04.964639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.786 ms 00:16:01.901 [2024-11-18 03:16:04.964648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:05.010590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:05.010636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:01.901 [2024-11-18 03:16:05.010651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.915 ms 00:16:01.901 [2024-11-18 03:16:05.010659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:05.010835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:05.010848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:01.901 [2024-11-18 03:16:05.010857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:16:01.901 [2024-11-18 03:16:05.010868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:05.013673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:05.013706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:01.901 [2024-11-18 03:16:05.013718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.786 ms 00:16:01.901 [2024-11-18 03:16:05.013731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:05.015966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:05.016110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:01.901 [2024-11-18 03:16:05.016129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.201 ms 00:16:01.901 [2024-11-18 03:16:05.016136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:05.016445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:05.016464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:01.901 [2024-11-18 03:16:05.016479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:16:01.901 [2024-11-18 03:16:05.016489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:05.040444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:05.040481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:01.901 [2024-11-18 03:16:05.040493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.933 ms 00:16:01.901 [2024-11-18 03:16:05.040505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:05.044039] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:05.044071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:01.901 [2024-11-18 03:16:05.044085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.504 ms 00:16:01.901 [2024-11-18 03:16:05.044093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:05.046843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:05.046874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:01.901 [2024-11-18 03:16:05.046884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.730 ms 00:16:01.901 [2024-11-18 03:16:05.046891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:05.049836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:05.049964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:01.901 [2024-11-18 03:16:05.049985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.925 ms 00:16:01.901 [2024-11-18 03:16:05.049993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:05.050016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:05.050025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:01.901 [2024-11-18 03:16:05.050037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:01.901 [2024-11-18 03:16:05.050047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:05.050107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.901 [2024-11-18 03:16:05.050116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:01.901 [2024-11-18 03:16:05.050129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:01.901 [2024-11-18 03:16:05.050136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.901 [2024-11-18 03:16:05.050927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2342.132 ms, result 0 00:16:01.901 { 00:16:01.901 "name": "ftl0", 00:16:01.901 "uuid": "950ef73d-6f06-4339-bc52-4d64d5442866" 00:16:01.901 } 00:16:01.901 03:16:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:01.901 03:16:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:01.901 03:16:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:01.901 03:16:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:01.901 [2024-11-18 03:16:05.359766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:01.901 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:01.901 Zero copy mechanism will not be used. 00:16:01.901 Running I/O for 4 seconds... 
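The -o 69632 argument above requests 68 KiB I/Os, which is why bdevperf warns that zero copy is disabled: the size exceeds the 65536-byte threshold it reports. A minimal arithmetic check of the two facts stated in the log, assuming 4 KiB FTL blocks:

    echo $(( 69632 > 65536 ))   # 1 -> I/O size exceeds the zero-copy threshold
    echo $(( 69632 / 4096 ))    # 17 -> each I/O spans 17 blocks of 4 KiB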
00:16:04.211 3241.00 IOPS, 215.22 MiB/s [2024-11-18T03:16:08.723Z] 3271.00 IOPS, 217.21 MiB/s [2024-11-18T03:16:09.662Z] 3246.00 IOPS, 215.55 MiB/s [2024-11-18T03:16:09.662Z] 3231.00 IOPS, 214.56 MiB/s 00:16:06.085 Latency(us) 00:16:06.085 [2024-11-18T03:16:09.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.085 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:06.085 ftl0 : 4.00 3229.83 214.48 0.00 0.00 325.61 171.72 2873.50 00:16:06.085 [2024-11-18T03:16:09.662Z] =================================================================================================================== 00:16:06.085 [2024-11-18T03:16:09.662Z] Total : 3229.83 214.48 0.00 0.00 325.61 171.72 2873.50 00:16:06.085 [2024-11-18 03:16:09.367793] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:06.085 { 00:16:06.085 "results": [ 00:16:06.085 { 00:16:06.085 "job": "ftl0", 00:16:06.085 "core_mask": "0x1", 00:16:06.085 "workload": "randwrite", 00:16:06.085 "status": "finished", 00:16:06.085 "queue_depth": 1, 00:16:06.085 "io_size": 69632, 00:16:06.085 "runtime": 4.001762, 00:16:06.085 "iops": 3229.827261091489, 00:16:06.085 "mibps": 214.4807165568567, 00:16:06.085 "io_failed": 0, 00:16:06.085 "io_timeout": 0, 00:16:06.086 "avg_latency_us": 325.6096338044934, 00:16:06.086 "min_latency_us": 171.71692307692308, 00:16:06.086 "max_latency_us": 2873.5015384615385 00:16:06.086 } 00:16:06.086 ], 00:16:06.086 "core_count": 1 00:16:06.086 } 00:16:06.086 03:16:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:06.086 [2024-11-18 03:16:09.472402] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:06.086 Running I/O for 4 seconds... 
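The "mibps" field in each results block is just iops scaled by the I/O size; the first run's reported throughput can be reproduced directly from its own JSON fields (values copied from the results above):

    awk 'BEGIN { printf "%.2f\n", 3229.827261091489 * 69632 / 1048576 }'
    # -> 214.48 MiB/s, matching "mibps": 214.4807... for the q=1 randwrite run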
00:16:07.954 11499.00 IOPS, 44.92 MiB/s [2024-11-18T03:16:12.519Z] 11310.00 IOPS, 44.18 MiB/s [2024-11-18T03:16:13.895Z] 11249.67 IOPS, 43.94 MiB/s [2024-11-18T03:16:13.895Z] 11185.75 IOPS, 43.69 MiB/s 00:16:10.318 Latency(us) 00:16:10.318 [2024-11-18T03:16:13.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.318 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:10.318 ftl0 : 4.01 11177.07 43.66 0.00 0.00 11429.36 277.27 32667.18 00:16:10.318 [2024-11-18T03:16:13.895Z] =================================================================================================================== 00:16:10.318 [2024-11-18T03:16:13.895Z] Total : 11177.07 43.66 0.00 0.00 11429.36 0.00 32667.18 00:16:10.318 [2024-11-18 03:16:13.499979] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:10.318 { 00:16:10.318 "results": [ 00:16:10.318 { 00:16:10.318 "job": "ftl0", 00:16:10.318 "core_mask": "0x1", 00:16:10.318 "workload": "randwrite", 00:16:10.318 "status": "finished", 00:16:10.318 "queue_depth": 128, 00:16:10.318 "io_size": 4096, 00:16:10.318 "runtime": 4.014378, 00:16:10.318 "iops": 11177.074007480112, 00:16:10.318 "mibps": 43.66044534171919, 00:16:10.318 "io_failed": 0, 00:16:10.318 "io_timeout": 0, 00:16:10.318 "avg_latency_us": 11429.356416096774, 00:16:10.318 "min_latency_us": 277.2676923076923, 00:16:10.318 "max_latency_us": 32667.175384615384 00:16:10.318 } 00:16:10.318 ], 00:16:10.318 "core_count": 1 00:16:10.318 } 00:16:10.318 03:16:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:10.318 [2024-11-18 03:16:13.601388] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:10.318 Running I/O for 4 seconds... 
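At queue depth 128 the reported IOPS and average latency should be mutually consistent via Little's law (in-flight I/Os ~= IOPS * average latency); checking the q=128 randwrite numbers above:

    awk 'BEGIN { printf "%.1f\n", 11177.074 * 11429.356e-6 }'
    # -> ~127.7 I/Os in flight on average, consistent with -q 128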
00:16:12.187 8994.00 IOPS, 35.13 MiB/s [2024-11-18T03:16:16.699Z] 9078.00 IOPS, 35.46 MiB/s [2024-11-18T03:16:17.632Z] 9236.33 IOPS, 36.08 MiB/s [2024-11-18T03:16:17.632Z] 9200.50 IOPS, 35.94 MiB/s 00:16:14.055 Latency(us) 00:16:14.055 [2024-11-18T03:16:17.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.055 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:14.055 Verification LBA range: start 0x0 length 0x1400000 00:16:14.055 ftl0 : 4.01 9202.82 35.95 0.00 0.00 13854.78 212.68 23290.49 00:16:14.055 [2024-11-18T03:16:17.632Z] =================================================================================================================== 00:16:14.055 [2024-11-18T03:16:17.632Z] Total : 9202.82 35.95 0.00 0.00 13854.78 0.00 23290.49 00:16:14.055 [2024-11-18 03:16:17.627401] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:14.055 { 00:16:14.055 "results": [ 00:16:14.055 { 00:16:14.055 "job": "ftl0", 00:16:14.055 "core_mask": "0x1", 00:16:14.055 "workload": "verify", 00:16:14.055 "status": "finished", 00:16:14.055 "verify_range": { 00:16:14.055 "start": 0, 00:16:14.055 "length": 20971520 00:16:14.055 }, 00:16:14.055 "queue_depth": 128, 00:16:14.055 "io_size": 4096, 00:16:14.055 "runtime": 4.012685, 00:16:14.055 "iops": 9202.815571120085, 00:16:14.055 "mibps": 35.94849832468783, 00:16:14.055 "io_failed": 0, 00:16:14.055 "io_timeout": 0, 00:16:14.055 "avg_latency_us": 13854.775501933074, 00:16:14.055 "min_latency_us": 212.6769230769231, 00:16:14.055 "max_latency_us": 23290.486153846156 00:16:14.055 } 00:16:14.055 ], 00:16:14.055 "core_count": 1 00:16:14.055 } 00:16:14.313 03:16:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:14.313 [2024-11-18 03:16:17.827713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.313 [2024-11-18 03:16:17.827871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:14.313 [2024-11-18 03:16:17.827894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:14.313 [2024-11-18 03:16:17.827903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.313 [2024-11-18 03:16:17.827929] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:14.313 [2024-11-18 03:16:17.828344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.313 [2024-11-18 03:16:17.828364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:14.313 [2024-11-18 03:16:17.828373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:16:14.313 [2024-11-18 03:16:17.828387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.313 [2024-11-18 03:16:17.829957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.313 [2024-11-18 03:16:17.830002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:14.313 [2024-11-18 03:16:17.830011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.553 ms 00:16:14.313 [2024-11-18 03:16:17.830023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.573 [2024-11-18 03:16:17.961005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.573 [2024-11-18 03:16:17.961044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:14.573 [2024-11-18 03:16:17.961055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 130.966 ms 00:16:14.573 [2024-11-18 03:16:17.961064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.573 [2024-11-18 03:16:17.967224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.573 [2024-11-18 03:16:17.967370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:14.573 [2024-11-18 03:16:17.967386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.131 ms 00:16:14.573 [2024-11-18 03:16:17.967396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.573 [2024-11-18 03:16:17.968443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.573 [2024-11-18 03:16:17.968473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:14.573 [2024-11-18 03:16:17.968482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:16:14.573 [2024-11-18 03:16:17.968491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.573 [2024-11-18 03:16:17.972737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.573 [2024-11-18 03:16:17.972786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:14.573 [2024-11-18 03:16:17.972799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.911 ms 00:16:14.573 [2024-11-18 03:16:17.972816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.573 [2024-11-18 03:16:17.972922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.573 [2024-11-18 03:16:17.972934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:14.573 [2024-11-18 03:16:17.972942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:16:14.573 [2024-11-18 03:16:17.972951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.573 [2024-11-18 03:16:17.974888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.573 [2024-11-18 03:16:17.975027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:14.573 [2024-11-18 03:16:17.975042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.923 ms 00:16:14.573 [2024-11-18 03:16:17.975051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.573 [2024-11-18 03:16:17.976201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.573 [2024-11-18 03:16:17.976232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:14.573 [2024-11-18 03:16:17.976241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:16:14.573 [2024-11-18 03:16:17.976250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.573 [2024-11-18 03:16:17.977211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.573 [2024-11-18 03:16:17.977246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:14.573 [2024-11-18 03:16:17.977255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:16:14.573 [2024-11-18 03:16:17.977266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.573 [2024-11-18 03:16:17.978296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.573 [2024-11-18 03:16:17.978362] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:14.573 [2024-11-18 03:16:17.978371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:16:14.573 [2024-11-18 03:16:17.978380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.573 [2024-11-18 03:16:17.978407] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:14.573 [2024-11-18 03:16:17.978431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:14.573 [2024-11-18 03:16:17.978621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:14.573 [2024-11-18 03:16:17.978673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.978993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979238] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:14.574 [2024-11-18 03:16:17.979278] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:14.574 [2024-11-18 03:16:17.979286] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 950ef73d-6f06-4339-bc52-4d64d5442866 00:16:14.574 [2024-11-18 03:16:17.979295] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:14.574 [2024-11-18 03:16:17.979303] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:14.574 [2024-11-18 03:16:17.979324] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:14.574 [2024-11-18 03:16:17.979332] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:14.574 [2024-11-18 03:16:17.979342] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:14.574 [2024-11-18 03:16:17.979349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:14.574 [2024-11-18 03:16:17.979358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:14.574 [2024-11-18 03:16:17.979364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:14.574 [2024-11-18 03:16:17.979372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:14.574 [2024-11-18 03:16:17.979379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.574 [2024-11-18 03:16:17.979388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:14.574 [2024-11-18 03:16:17.979396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:16:14.574 [2024-11-18 03:16:17.979407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.574 [2024-11-18 03:16:17.980712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.574 [2024-11-18 03:16:17.980736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:14.574 [2024-11-18 03:16:17.980745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:16:14.574 [2024-11-18 03:16:17.980753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.574 [2024-11-18 03:16:17.980834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.574 [2024-11-18 03:16:17.980845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:14.574 [2024-11-18 03:16:17.980853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:16:14.574 [2024-11-18 03:16:17.980867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.575 [2024-11-18 03:16:17.985171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.575 [2024-11-18 03:16:17.985204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:14.575 [2024-11-18 03:16:17.985213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.575 [2024-11-18 03:16:17.985222] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:14.575 [2024-11-18 03:16:17.985275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.575 [2024-11-18 03:16:17.985285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:14.575 [2024-11-18 03:16:17.985293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.575 [2024-11-18 03:16:17.985301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.575 [2024-11-18 03:16:17.985364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.575 [2024-11-18 03:16:17.985376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:14.575 [2024-11-18 03:16:17.985383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.575 [2024-11-18 03:16:17.985392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.575 [2024-11-18 03:16:17.985406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.575 [2024-11-18 03:16:17.985416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:14.575 [2024-11-18 03:16:17.985423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.575 [2024-11-18 03:16:17.985434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.575 [2024-11-18 03:16:17.993730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.575 [2024-11-18 03:16:17.993888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:14.575 [2024-11-18 03:16:17.993903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.575 [2024-11-18 03:16:17.993912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.575 [2024-11-18 03:16:18.001097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.575 [2024-11-18 03:16:18.001233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:14.575 [2024-11-18 03:16:18.001247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.575 [2024-11-18 03:16:18.001256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.575 [2024-11-18 03:16:18.001334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.575 [2024-11-18 03:16:18.001355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:14.575 [2024-11-18 03:16:18.001364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.575 [2024-11-18 03:16:18.001373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.575 [2024-11-18 03:16:18.001401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.575 [2024-11-18 03:16:18.001411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:14.575 [2024-11-18 03:16:18.001419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.575 [2024-11-18 03:16:18.001430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.575 [2024-11-18 03:16:18.001495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.575 [2024-11-18 03:16:18.001508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:14.575 [2024-11-18 03:16:18.001521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:14.575 [2024-11-18 03:16:18.001532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.575 [2024-11-18 03:16:18.001559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.575 [2024-11-18 03:16:18.001569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:14.575 [2024-11-18 03:16:18.001578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.575 [2024-11-18 03:16:18.001587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.575 [2024-11-18 03:16:18.001621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.575 [2024-11-18 03:16:18.001630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:14.575 [2024-11-18 03:16:18.001640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.575 [2024-11-18 03:16:18.001649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.575 [2024-11-18 03:16:18.001688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.575 [2024-11-18 03:16:18.001699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:14.575 [2024-11-18 03:16:18.001706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.575 [2024-11-18 03:16:18.001716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.575 [2024-11-18 03:16:18.001830] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 174.084 ms, result 0 00:16:14.575 true 00:16:14.575 03:16:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84984 00:16:14.575 03:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 84984 ']' 00:16:14.575 03:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 84984 00:16:14.575 03:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:14.575 03:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.575 03:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84984 00:16:14.575 killing process with pid 84984 00:16:14.575 Received shutdown signal, test time was about 4.000000 seconds 00:16:14.575 00:16:14.575 Latency(us) 00:16:14.575 [2024-11-18T03:16:18.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.575 [2024-11-18T03:16:18.152Z] =================================================================================================================== 00:16:14.575 [2024-11-18T03:16:18.152Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:14.575 03:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.575 03:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.575 03:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84984' 00:16:14.575 03:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 84984 00:16:14.575 03:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 84984 00:16:14.833 03:16:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:14.833 Remove shared memory files 00:16:14.833 03:16:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:14.833 03:16:18 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:14.833 03:16:18 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:14.833 03:16:18 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:14.833 03:16:18 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:14.833 03:16:18 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:14.833 03:16:18 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:14.833 ************************************ 00:16:14.833 END TEST ftl_bdevperf 00:16:14.833 ************************************ 00:16:14.833 00:16:14.833 real 0m19.427s 00:16:14.833 user 0m22.194s 00:16:14.833 sys 0m0.764s 00:16:14.833 03:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.833 03:16:18 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:14.833 03:16:18 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:14.833 03:16:18 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:14.833 03:16:18 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:14.833 03:16:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:14.833 ************************************ 00:16:14.833 START TEST ftl_trim 00:16:14.833 ************************************ 00:16:14.833 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:14.833 * Looking for test storage... 00:16:14.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:14.833 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:14.833 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:14.833 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:15.093 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.093 03:16:18 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:15.093 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.093 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:15.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.093 --rc genhtml_branch_coverage=1 00:16:15.093 --rc genhtml_function_coverage=1 00:16:15.093 --rc genhtml_legend=1 00:16:15.093 --rc geninfo_all_blocks=1 00:16:15.093 --rc geninfo_unexecuted_blocks=1 00:16:15.093 00:16:15.093 ' 00:16:15.093 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:15.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.093 --rc genhtml_branch_coverage=1 00:16:15.093 --rc genhtml_function_coverage=1 00:16:15.093 --rc genhtml_legend=1 00:16:15.093 --rc geninfo_all_blocks=1 00:16:15.093 --rc geninfo_unexecuted_blocks=1 00:16:15.093 00:16:15.093 ' 00:16:15.093 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:15.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.093 --rc genhtml_branch_coverage=1 00:16:15.093 --rc genhtml_function_coverage=1 00:16:15.093 --rc genhtml_legend=1 00:16:15.093 --rc geninfo_all_blocks=1 00:16:15.093 --rc geninfo_unexecuted_blocks=1 00:16:15.093 00:16:15.093 ' 00:16:15.093 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:15.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.093 --rc genhtml_branch_coverage=1 00:16:15.093 --rc genhtml_function_coverage=1 00:16:15.093 --rc genhtml_legend=1 00:16:15.093 --rc geninfo_all_blocks=1 00:16:15.093 --rc geninfo_unexecuted_blocks=1 00:16:15.093 00:16:15.093 ' 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:15.093 03:16:18 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85304 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:15.093 03:16:18 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85304 00:16:15.094 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85304 ']' 00:16:15.094 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.094 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:15.094 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.094 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:15.094 03:16:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:15.094 [2024-11-18 03:16:18.552952] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:15.094 [2024-11-18 03:16:18.553176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85304 ] 00:16:15.352 [2024-11-18 03:16:18.711462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:15.352 [2024-11-18 03:16:18.745564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.352 [2024-11-18 03:16:18.745936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.352 [2024-11-18 03:16:18.745885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.919 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:15.919 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:15.919 03:16:19 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:15.919 03:16:19 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:15.919 03:16:19 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:15.919 03:16:19 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:15.919 03:16:19 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:15.919 03:16:19 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:16.177 03:16:19 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:16.177 03:16:19 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:16.177 03:16:19 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:16.177 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:16.177 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:16.177 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:16.177 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:16.177 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:16.437 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:16.437 { 00:16:16.437 "name": "nvme0n1", 00:16:16.437 "aliases": [ 
00:16:16.437 "710a15ab-a543-49e2-8308-2e4eb8463df7" 00:16:16.437 ], 00:16:16.437 "product_name": "NVMe disk", 00:16:16.437 "block_size": 4096, 00:16:16.437 "num_blocks": 1310720, 00:16:16.437 "uuid": "710a15ab-a543-49e2-8308-2e4eb8463df7", 00:16:16.437 "numa_id": -1, 00:16:16.437 "assigned_rate_limits": { 00:16:16.437 "rw_ios_per_sec": 0, 00:16:16.437 "rw_mbytes_per_sec": 0, 00:16:16.437 "r_mbytes_per_sec": 0, 00:16:16.437 "w_mbytes_per_sec": 0 00:16:16.437 }, 00:16:16.437 "claimed": true, 00:16:16.437 "claim_type": "read_many_write_one", 00:16:16.437 "zoned": false, 00:16:16.437 "supported_io_types": { 00:16:16.437 "read": true, 00:16:16.437 "write": true, 00:16:16.437 "unmap": true, 00:16:16.437 "flush": true, 00:16:16.437 "reset": true, 00:16:16.437 "nvme_admin": true, 00:16:16.437 "nvme_io": true, 00:16:16.437 "nvme_io_md": false, 00:16:16.437 "write_zeroes": true, 00:16:16.437 "zcopy": false, 00:16:16.437 "get_zone_info": false, 00:16:16.437 "zone_management": false, 00:16:16.437 "zone_append": false, 00:16:16.437 "compare": true, 00:16:16.437 "compare_and_write": false, 00:16:16.437 "abort": true, 00:16:16.437 "seek_hole": false, 00:16:16.437 "seek_data": false, 00:16:16.437 "copy": true, 00:16:16.437 "nvme_iov_md": false 00:16:16.437 }, 00:16:16.437 "driver_specific": { 00:16:16.437 "nvme": [ 00:16:16.437 { 00:16:16.437 "pci_address": "0000:00:11.0", 00:16:16.437 "trid": { 00:16:16.437 "trtype": "PCIe", 00:16:16.437 "traddr": "0000:00:11.0" 00:16:16.437 }, 00:16:16.437 "ctrlr_data": { 00:16:16.437 "cntlid": 0, 00:16:16.437 "vendor_id": "0x1b36", 00:16:16.437 "model_number": "QEMU NVMe Ctrl", 00:16:16.437 "serial_number": "12341", 00:16:16.437 "firmware_revision": "8.0.0", 00:16:16.437 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:16.437 "oacs": { 00:16:16.437 "security": 0, 00:16:16.437 "format": 1, 00:16:16.437 "firmware": 0, 00:16:16.437 "ns_manage": 1 00:16:16.437 }, 00:16:16.437 "multi_ctrlr": false, 00:16:16.437 "ana_reporting": false 00:16:16.437 }, 00:16:16.437 "vs": { 00:16:16.437 "nvme_version": "1.4" 00:16:16.437 }, 00:16:16.437 "ns_data": { 00:16:16.437 "id": 1, 00:16:16.437 "can_share": false 00:16:16.437 } 00:16:16.437 } 00:16:16.437 ], 00:16:16.437 "mp_policy": "active_passive" 00:16:16.437 } 00:16:16.437 } 00:16:16.437 ]' 00:16:16.437 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:16.437 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:16.437 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:16.437 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:16.437 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:16.437 03:16:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:16.437 03:16:19 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:16.437 03:16:19 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:16.437 03:16:19 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:16.437 03:16:19 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:16.437 03:16:19 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:16.697 03:16:20 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=198bab9b-d36b-41aa-827e-47fbea5d2356 00:16:16.697 03:16:20 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:16.697 03:16:20 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 198bab9b-d36b-41aa-827e-47fbea5d2356 00:16:16.956 03:16:20 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:17.214 03:16:20 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=6095d65a-8a0a-429c-8ac0-36f0cd7fd9e8 00:16:17.214 03:16:20 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6095d65a-8a0a-429c-8ac0-36f0cd7fd9e8 00:16:17.214 03:16:20 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=d234f438-5ab4-430d-b6dd-2436458dcadf 00:16:17.214 03:16:20 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d234f438-5ab4-430d-b6dd-2436458dcadf 00:16:17.214 03:16:20 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:17.214 03:16:20 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:17.214 03:16:20 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=d234f438-5ab4-430d-b6dd-2436458dcadf 00:16:17.214 03:16:20 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:17.214 03:16:20 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size d234f438-5ab4-430d-b6dd-2436458dcadf 00:16:17.214 03:16:20 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=d234f438-5ab4-430d-b6dd-2436458dcadf 00:16:17.214 03:16:20 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:17.214 03:16:20 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:17.214 03:16:20 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:17.214 03:16:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d234f438-5ab4-430d-b6dd-2436458dcadf 00:16:17.473 03:16:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:17.473 { 00:16:17.473 "name": "d234f438-5ab4-430d-b6dd-2436458dcadf", 00:16:17.473 "aliases": [ 00:16:17.473 "lvs/nvme0n1p0" 00:16:17.473 ], 00:16:17.473 "product_name": "Logical Volume", 00:16:17.473 "block_size": 4096, 00:16:17.473 "num_blocks": 26476544, 00:16:17.473 "uuid": "d234f438-5ab4-430d-b6dd-2436458dcadf", 00:16:17.473 "assigned_rate_limits": { 00:16:17.473 "rw_ios_per_sec": 0, 00:16:17.473 "rw_mbytes_per_sec": 0, 00:16:17.473 "r_mbytes_per_sec": 0, 00:16:17.473 "w_mbytes_per_sec": 0 00:16:17.473 }, 00:16:17.473 "claimed": false, 00:16:17.473 "zoned": false, 00:16:17.473 "supported_io_types": { 00:16:17.473 "read": true, 00:16:17.473 "write": true, 00:16:17.473 "unmap": true, 00:16:17.473 "flush": false, 00:16:17.473 "reset": true, 00:16:17.473 "nvme_admin": false, 00:16:17.473 "nvme_io": false, 00:16:17.473 "nvme_io_md": false, 00:16:17.473 "write_zeroes": true, 00:16:17.473 "zcopy": false, 00:16:17.473 "get_zone_info": false, 00:16:17.473 "zone_management": false, 00:16:17.473 "zone_append": false, 00:16:17.474 "compare": false, 00:16:17.474 "compare_and_write": false, 00:16:17.474 "abort": false, 00:16:17.474 "seek_hole": true, 00:16:17.474 "seek_data": true, 00:16:17.474 "copy": false, 00:16:17.474 "nvme_iov_md": false 00:16:17.474 }, 00:16:17.474 "driver_specific": { 00:16:17.474 "lvol": { 00:16:17.474 "lvol_store_uuid": "6095d65a-8a0a-429c-8ac0-36f0cd7fd9e8", 00:16:17.474 "base_bdev": "nvme0n1", 00:16:17.474 "thin_provision": true, 00:16:17.474 "num_allocated_clusters": 0, 00:16:17.474 "snapshot": false, 00:16:17.474 "clone": false, 00:16:17.474 "esnap_clone": false 00:16:17.474 } 00:16:17.474 } 00:16:17.474 } 00:16:17.474 ]' 00:16:17.474 03:16:20 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:17.474 03:16:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:17.474 03:16:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:17.474 03:16:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:17.474 03:16:20 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:17.474 03:16:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:17.474 03:16:20 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:17.474 03:16:20 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:17.474 03:16:20 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:17.734 03:16:21 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:17.734 03:16:21 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:17.734 03:16:21 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size d234f438-5ab4-430d-b6dd-2436458dcadf 00:16:17.735 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=d234f438-5ab4-430d-b6dd-2436458dcadf 00:16:17.735 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:17.735 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:17.735 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:17.735 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d234f438-5ab4-430d-b6dd-2436458dcadf 00:16:17.994 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:17.994 { 00:16:17.994 "name": "d234f438-5ab4-430d-b6dd-2436458dcadf", 00:16:17.994 "aliases": [ 00:16:17.994 "lvs/nvme0n1p0" 00:16:17.994 ], 00:16:17.994 "product_name": "Logical Volume", 00:16:17.994 "block_size": 4096, 00:16:17.994 "num_blocks": 26476544, 00:16:17.994 "uuid": "d234f438-5ab4-430d-b6dd-2436458dcadf", 00:16:17.994 "assigned_rate_limits": { 00:16:17.994 "rw_ios_per_sec": 0, 00:16:17.994 "rw_mbytes_per_sec": 0, 00:16:17.994 "r_mbytes_per_sec": 0, 00:16:17.994 "w_mbytes_per_sec": 0 00:16:17.994 }, 00:16:17.994 "claimed": false, 00:16:17.994 "zoned": false, 00:16:17.994 "supported_io_types": { 00:16:17.994 "read": true, 00:16:17.994 "write": true, 00:16:17.994 "unmap": true, 00:16:17.994 "flush": false, 00:16:17.994 "reset": true, 00:16:17.994 "nvme_admin": false, 00:16:17.994 "nvme_io": false, 00:16:17.994 "nvme_io_md": false, 00:16:17.994 "write_zeroes": true, 00:16:17.994 "zcopy": false, 00:16:17.994 "get_zone_info": false, 00:16:17.994 "zone_management": false, 00:16:17.994 "zone_append": false, 00:16:17.994 "compare": false, 00:16:17.994 "compare_and_write": false, 00:16:17.994 "abort": false, 00:16:17.994 "seek_hole": true, 00:16:17.994 "seek_data": true, 00:16:17.994 "copy": false, 00:16:17.994 "nvme_iov_md": false 00:16:17.994 }, 00:16:17.994 "driver_specific": { 00:16:17.994 "lvol": { 00:16:17.994 "lvol_store_uuid": "6095d65a-8a0a-429c-8ac0-36f0cd7fd9e8", 00:16:17.994 "base_bdev": "nvme0n1", 00:16:17.994 "thin_provision": true, 00:16:17.994 "num_allocated_clusters": 0, 00:16:17.994 "snapshot": false, 00:16:17.994 "clone": false, 00:16:17.994 "esnap_clone": false 00:16:17.994 } 00:16:17.994 } 00:16:17.994 } 00:16:17.994 ]' 00:16:17.994 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:17.994 03:16:21 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:17.994 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:17.994 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:17.994 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:17.994 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:17.994 03:16:21 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:17.994 03:16:21 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:18.255 03:16:21 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:18.255 03:16:21 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:18.255 03:16:21 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size d234f438-5ab4-430d-b6dd-2436458dcadf 00:16:18.255 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=d234f438-5ab4-430d-b6dd-2436458dcadf 00:16:18.255 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:18.255 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:18.255 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:18.255 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d234f438-5ab4-430d-b6dd-2436458dcadf 00:16:18.515 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:18.515 { 00:16:18.515 "name": "d234f438-5ab4-430d-b6dd-2436458dcadf", 00:16:18.515 "aliases": [ 00:16:18.515 "lvs/nvme0n1p0" 00:16:18.515 ], 00:16:18.515 "product_name": "Logical Volume", 00:16:18.515 "block_size": 4096, 00:16:18.515 "num_blocks": 26476544, 00:16:18.515 "uuid": "d234f438-5ab4-430d-b6dd-2436458dcadf", 00:16:18.515 "assigned_rate_limits": { 00:16:18.515 "rw_ios_per_sec": 0, 00:16:18.515 "rw_mbytes_per_sec": 0, 00:16:18.516 "r_mbytes_per_sec": 0, 00:16:18.516 "w_mbytes_per_sec": 0 00:16:18.516 }, 00:16:18.516 "claimed": false, 00:16:18.516 "zoned": false, 00:16:18.516 "supported_io_types": { 00:16:18.516 "read": true, 00:16:18.516 "write": true, 00:16:18.516 "unmap": true, 00:16:18.516 "flush": false, 00:16:18.516 "reset": true, 00:16:18.516 "nvme_admin": false, 00:16:18.516 "nvme_io": false, 00:16:18.516 "nvme_io_md": false, 00:16:18.516 "write_zeroes": true, 00:16:18.516 "zcopy": false, 00:16:18.516 "get_zone_info": false, 00:16:18.516 "zone_management": false, 00:16:18.516 "zone_append": false, 00:16:18.516 "compare": false, 00:16:18.516 "compare_and_write": false, 00:16:18.516 "abort": false, 00:16:18.516 "seek_hole": true, 00:16:18.516 "seek_data": true, 00:16:18.516 "copy": false, 00:16:18.516 "nvme_iov_md": false 00:16:18.516 }, 00:16:18.516 "driver_specific": { 00:16:18.516 "lvol": { 00:16:18.516 "lvol_store_uuid": "6095d65a-8a0a-429c-8ac0-36f0cd7fd9e8", 00:16:18.516 "base_bdev": "nvme0n1", 00:16:18.516 "thin_provision": true, 00:16:18.516 "num_allocated_clusters": 0, 00:16:18.516 "snapshot": false, 00:16:18.516 "clone": false, 00:16:18.516 "esnap_clone": false 00:16:18.516 } 00:16:18.516 } 00:16:18.516 } 00:16:18.516 ]' 00:16:18.516 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:18.516 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:18.516 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:18.516 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:18.516 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:18.516 03:16:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:18.516 03:16:21 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:18.516 03:16:21 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d234f438-5ab4-430d-b6dd-2436458dcadf -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:18.776 [2024-11-18 03:16:22.156495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.776 [2024-11-18 03:16:22.156553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:18.776 [2024-11-18 03:16:22.156569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:18.776 [2024-11-18 03:16:22.156579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.776 [2024-11-18 03:16:22.159543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.776 [2024-11-18 03:16:22.159583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:18.776 [2024-11-18 03:16:22.159593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.936 ms 00:16:18.776 [2024-11-18 03:16:22.159605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.776 [2024-11-18 03:16:22.159732] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:18.776 [2024-11-18 03:16:22.159990] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:18.776 [2024-11-18 03:16:22.160005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.776 [2024-11-18 03:16:22.160015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:18.776 [2024-11-18 03:16:22.160026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:16:18.776 [2024-11-18 03:16:22.160035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.776 [2024-11-18 03:16:22.160137] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e65d81cb-0fb4-4410-9e48-4b07019a2d5e 00:16:18.776 [2024-11-18 03:16:22.161542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.776 [2024-11-18 03:16:22.161575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:18.776 [2024-11-18 03:16:22.161587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:18.776 [2024-11-18 03:16:22.161594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.776 [2024-11-18 03:16:22.168731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.776 [2024-11-18 03:16:22.168763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:18.776 [2024-11-18 03:16:22.168774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.049 ms 00:16:18.776 [2024-11-18 03:16:22.168782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.776 [2024-11-18 03:16:22.168912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.776 [2024-11-18 03:16:22.168924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:18.776 [2024-11-18 03:16:22.168934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.063 ms 00:16:18.776 [2024-11-18 03:16:22.168953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.776 [2024-11-18 03:16:22.168995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.776 [2024-11-18 03:16:22.169006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:18.776 [2024-11-18 03:16:22.169017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:18.776 [2024-11-18 03:16:22.169024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.776 [2024-11-18 03:16:22.169062] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:18.776 [2024-11-18 03:16:22.170849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.776 [2024-11-18 03:16:22.171020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:18.776 [2024-11-18 03:16:22.171036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.794 ms 00:16:18.776 [2024-11-18 03:16:22.171046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.776 [2024-11-18 03:16:22.171100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.776 [2024-11-18 03:16:22.171114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:18.776 [2024-11-18 03:16:22.171124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:18.776 [2024-11-18 03:16:22.171140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.776 [2024-11-18 03:16:22.171174] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:18.776 [2024-11-18 03:16:22.171330] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:18.776 [2024-11-18 03:16:22.171346] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:18.776 [2024-11-18 03:16:22.171359] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:18.776 [2024-11-18 03:16:22.171370] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:18.776 [2024-11-18 03:16:22.171382] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:18.776 [2024-11-18 03:16:22.171390] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:18.776 [2024-11-18 03:16:22.171411] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:18.776 [2024-11-18 03:16:22.171420] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:18.776 [2024-11-18 03:16:22.171431] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:18.776 [2024-11-18 03:16:22.171439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.776 [2024-11-18 03:16:22.171448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:18.776 [2024-11-18 03:16:22.171456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:16:18.776 [2024-11-18 03:16:22.171468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.776 [2024-11-18 03:16:22.171562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.776 
[2024-11-18 03:16:22.171576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:18.776 [2024-11-18 03:16:22.171585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:18.776 [2024-11-18 03:16:22.171595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.776 [2024-11-18 03:16:22.171733] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:18.776 [2024-11-18 03:16:22.171746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:18.776 [2024-11-18 03:16:22.171757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:18.776 [2024-11-18 03:16:22.171768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.776 [2024-11-18 03:16:22.171778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:18.776 [2024-11-18 03:16:22.171789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:18.776 [2024-11-18 03:16:22.171796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:18.776 [2024-11-18 03:16:22.171806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:18.776 [2024-11-18 03:16:22.171814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:18.776 [2024-11-18 03:16:22.171824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:18.776 [2024-11-18 03:16:22.171832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:18.777 [2024-11-18 03:16:22.171841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:18.777 [2024-11-18 03:16:22.171851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:18.777 [2024-11-18 03:16:22.171864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:18.777 [2024-11-18 03:16:22.171872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:18.777 [2024-11-18 03:16:22.171882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.777 [2024-11-18 03:16:22.171890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:18.777 [2024-11-18 03:16:22.171900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:18.777 [2024-11-18 03:16:22.171908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.777 [2024-11-18 03:16:22.171918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:18.777 [2024-11-18 03:16:22.171926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:18.777 [2024-11-18 03:16:22.171935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:18.777 [2024-11-18 03:16:22.171942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:18.777 [2024-11-18 03:16:22.171950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:18.777 [2024-11-18 03:16:22.171956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:18.777 [2024-11-18 03:16:22.171965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:18.777 [2024-11-18 03:16:22.171972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:18.777 [2024-11-18 03:16:22.171980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:18.777 [2024-11-18 03:16:22.171986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:18.777 [2024-11-18 03:16:22.171997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:18.777 [2024-11-18 03:16:22.172003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:18.777 [2024-11-18 03:16:22.172012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:18.777 [2024-11-18 03:16:22.172018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:18.777 [2024-11-18 03:16:22.172027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:18.777 [2024-11-18 03:16:22.172035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:18.777 [2024-11-18 03:16:22.172045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:18.777 [2024-11-18 03:16:22.172051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:18.777 [2024-11-18 03:16:22.172059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:18.777 [2024-11-18 03:16:22.172067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:18.777 [2024-11-18 03:16:22.172076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.777 [2024-11-18 03:16:22.172083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:18.777 [2024-11-18 03:16:22.172091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:18.777 [2024-11-18 03:16:22.172097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.777 [2024-11-18 03:16:22.172105] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:18.777 [2024-11-18 03:16:22.172123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:18.777 [2024-11-18 03:16:22.172134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:18.777 [2024-11-18 03:16:22.172142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.777 [2024-11-18 03:16:22.172152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:18.777 [2024-11-18 03:16:22.172158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:18.777 [2024-11-18 03:16:22.172166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:18.777 [2024-11-18 03:16:22.172174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:18.777 [2024-11-18 03:16:22.172182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:18.777 [2024-11-18 03:16:22.172188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:18.777 [2024-11-18 03:16:22.172200] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:18.777 [2024-11-18 03:16:22.172210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:18.777 [2024-11-18 03:16:22.172220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:18.777 [2024-11-18 03:16:22.172227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:18.777 [2024-11-18 03:16:22.172236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:18.777 [2024-11-18 03:16:22.172244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:18.777 [2024-11-18 03:16:22.172252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:18.777 [2024-11-18 03:16:22.172259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:18.777 [2024-11-18 03:16:22.172271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:18.777 [2024-11-18 03:16:22.172279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:18.777 [2024-11-18 03:16:22.172287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:18.777 [2024-11-18 03:16:22.172294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:18.777 [2024-11-18 03:16:22.172303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:18.777 [2024-11-18 03:16:22.172321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:18.777 [2024-11-18 03:16:22.172333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:18.777 [2024-11-18 03:16:22.172341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:18.777 [2024-11-18 03:16:22.172349] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:18.777 [2024-11-18 03:16:22.172357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:18.777 [2024-11-18 03:16:22.172368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:18.777 [2024-11-18 03:16:22.172375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:18.777 [2024-11-18 03:16:22.172385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:18.777 [2024-11-18 03:16:22.172392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:18.777 [2024-11-18 03:16:22.172401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.777 [2024-11-18 03:16:22.172409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:18.777 [2024-11-18 03:16:22.172432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:16:18.777 [2024-11-18 03:16:22.172439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.777 [2024-11-18 03:16:22.172529] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:18.777 [2024-11-18 03:16:22.172539] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:21.316 [2024-11-18 03:16:24.563681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.316 [2024-11-18 03:16:24.563937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:21.316 [2024-11-18 03:16:24.564067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2391.133 ms 00:16:21.316 [2024-11-18 03:16:24.564130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.316 [2024-11-18 03:16:24.584572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.316 [2024-11-18 03:16:24.584809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:21.316 [2024-11-18 03:16:24.585001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.260 ms 00:16:21.316 [2024-11-18 03:16:24.585096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.316 [2024-11-18 03:16:24.585373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.316 [2024-11-18 03:16:24.585443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:21.316 [2024-11-18 03:16:24.585538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:16:21.316 [2024-11-18 03:16:24.585576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.316 [2024-11-18 03:16:24.597610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.316 [2024-11-18 03:16:24.597720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:21.316 [2024-11-18 03:16:24.597778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.970 ms 00:16:21.316 [2024-11-18 03:16:24.597801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.316 [2024-11-18 03:16:24.597886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.316 [2024-11-18 03:16:24.597958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:21.316 [2024-11-18 03:16:24.597985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:21.316 [2024-11-18 03:16:24.598004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.316 [2024-11-18 03:16:24.598459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.316 [2024-11-18 03:16:24.598563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:21.316 [2024-11-18 03:16:24.598614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:16:21.316 [2024-11-18 03:16:24.598664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.316 [2024-11-18 03:16:24.598817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.316 [2024-11-18 03:16:24.598881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:21.316 [2024-11-18 03:16:24.598908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:16:21.316 [2024-11-18 03:16:24.598949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.316 [2024-11-18 03:16:24.605826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.316 [2024-11-18 03:16:24.605921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:21.316 [2024-11-18 03:16:24.605974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.830 ms 00:16:21.316 [2024-11-18 03:16:24.605997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.316 [2024-11-18 03:16:24.615325] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:21.316 [2024-11-18 03:16:24.632682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.316 [2024-11-18 03:16:24.632798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:21.316 [2024-11-18 03:16:24.632850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.275 ms 00:16:21.316 [2024-11-18 03:16:24.632876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.316 [2024-11-18 03:16:24.696908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.316 [2024-11-18 03:16:24.697079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:21.316 [2024-11-18 03:16:24.697163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.657 ms 00:16:21.316 [2024-11-18 03:16:24.697194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.316 [2024-11-18 03:16:24.697420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.316 [2024-11-18 03:16:24.697454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:21.316 [2024-11-18 03:16:24.697566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:16:21.316 [2024-11-18 03:16:24.697593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.316 [2024-11-18 03:16:24.701179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.316 [2024-11-18 03:16:24.701287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:21.316 [2024-11-18 03:16:24.701400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.528 ms 00:16:21.316 [2024-11-18 03:16:24.701427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.316 [2024-11-18 03:16:24.704571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.316 [2024-11-18 03:16:24.704673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:21.316 [2024-11-18 03:16:24.704735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.939 ms 00:16:21.317 [2024-11-18 03:16:24.704758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.317 [2024-11-18 03:16:24.705076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.317 [2024-11-18 03:16:24.705149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:21.317 [2024-11-18 03:16:24.705201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:16:21.317 [2024-11-18 03:16:24.705216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.317 [2024-11-18 03:16:24.736528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.317 [2024-11-18 03:16:24.736572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:21.317 [2024-11-18 03:16:24.736595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.273 ms 00:16:21.317 [2024-11-18 03:16:24.736616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
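For reference, the bdev stack that ftl/trim.sh assembles in the trace above reduces to the following RPC sequence. This is a condensed sketch reconstructed from the xtrace lines, not a standalone script: it omits the stale-lvstore cleanup and the waitforlisten/waitforbdev polling helpers the test uses, paths are shortened to the repo root, and the lvstore/lvol UUIDs are the values printed by this particular run and will differ on a fresh target.

./build/bin/spdk_tgt -m 0x7 &                                   # core mask 0x7 -> reactors on cores 0-2, as logged above
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device: QEMU NVMe, 1310720 x 4096 B = 5120 MiB
./scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs           # printed lvstore UUID: 6095d65a-8a0a-429c-8ac0-36f0cd7fd9e8
./scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t \
    -u 6095d65a-8a0a-429c-8ac0-36f0cd7fd9e8                     # thin-provisioned 103424 MiB lvol on the 5120 MiB base
./scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # second NVMe used as the NV cache
./scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1             # one 5171 MiB partition, nvc0n1p0, for the write buffer
./scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
    -d d234f438-5ab4-430d-b6dd-2436458dcadf -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10     # -d is the lvol UUID; startup is the FTL trace shown here
./scripts/rpc.py bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks'  # 23592960 once 'FTL startup' finishes

The size checks interleaved in the trace (jq '.[] .block_size' / '.[] .num_blocks' over bdev_get_bdevs output) are how the test derives bdev_size in MiB: block_size * num_blocks / 1048576, e.g. 4096 * 26476544 / 1048576 = 103424.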
00:16:21.317 [2024-11-18 03:16:24.741217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.317 [2024-11-18 03:16:24.741257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:21.317 [2024-11-18 03:16:24.741267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.535 ms 00:16:21.317 [2024-11-18 03:16:24.741279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.317 [2024-11-18 03:16:24.744578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.317 [2024-11-18 03:16:24.744612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:21.317 [2024-11-18 03:16:24.744620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.239 ms 00:16:21.317 [2024-11-18 03:16:24.744629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.317 [2024-11-18 03:16:24.747824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.317 [2024-11-18 03:16:24.747956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:21.317 [2024-11-18 03:16:24.747972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.151 ms 00:16:21.317 [2024-11-18 03:16:24.747984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.317 [2024-11-18 03:16:24.748047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.317 [2024-11-18 03:16:24.748061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:21.317 [2024-11-18 03:16:24.748070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:21.317 [2024-11-18 03:16:24.748082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.317 [2024-11-18 03:16:24.748163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.317 [2024-11-18 03:16:24.748175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:21.317 [2024-11-18 03:16:24.748184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:21.317 [2024-11-18 03:16:24.748192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.317 [2024-11-18 03:16:24.749075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:21.317 [2024-11-18 03:16:24.750070] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2592.306 ms, result 0 00:16:21.317 [2024-11-18 03:16:24.751156] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:21.317 { 00:16:21.317 "name": "ftl0", 00:16:21.317 "uuid": "e65d81cb-0fb4-4410-9e48-4b07019a2d5e" 00:16:21.317 } 00:16:21.317 03:16:24 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:21.317 03:16:24 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:21.317 03:16:24 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:21.317 03:16:24 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:21.317 03:16:24 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:21.317 03:16:24 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:21.317 03:16:24 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:21.578 03:16:24 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:21.838 [ 00:16:21.838 { 00:16:21.838 "name": "ftl0", 00:16:21.838 "aliases": [ 00:16:21.838 "e65d81cb-0fb4-4410-9e48-4b07019a2d5e" 00:16:21.838 ], 00:16:21.839 "product_name": "FTL disk", 00:16:21.839 "block_size": 4096, 00:16:21.839 "num_blocks": 23592960, 00:16:21.839 "uuid": "e65d81cb-0fb4-4410-9e48-4b07019a2d5e", 00:16:21.839 "assigned_rate_limits": { 00:16:21.839 "rw_ios_per_sec": 0, 00:16:21.839 "rw_mbytes_per_sec": 0, 00:16:21.839 "r_mbytes_per_sec": 0, 00:16:21.839 "w_mbytes_per_sec": 0 00:16:21.839 }, 00:16:21.839 "claimed": false, 00:16:21.839 "zoned": false, 00:16:21.839 "supported_io_types": { 00:16:21.839 "read": true, 00:16:21.839 "write": true, 00:16:21.839 "unmap": true, 00:16:21.839 "flush": true, 00:16:21.839 "reset": false, 00:16:21.839 "nvme_admin": false, 00:16:21.839 "nvme_io": false, 00:16:21.839 "nvme_io_md": false, 00:16:21.839 "write_zeroes": true, 00:16:21.839 "zcopy": false, 00:16:21.839 "get_zone_info": false, 00:16:21.839 "zone_management": false, 00:16:21.839 "zone_append": false, 00:16:21.839 "compare": false, 00:16:21.839 "compare_and_write": false, 00:16:21.839 "abort": false, 00:16:21.839 "seek_hole": false, 00:16:21.839 "seek_data": false, 00:16:21.839 "copy": false, 00:16:21.839 "nvme_iov_md": false 00:16:21.839 }, 00:16:21.839 "driver_specific": { 00:16:21.839 "ftl": { 00:16:21.839 "base_bdev": "d234f438-5ab4-430d-b6dd-2436458dcadf", 00:16:21.839 "cache": "nvc0n1p0" 00:16:21.839 } 00:16:21.839 } 00:16:21.839 } 00:16:21.839 ] 00:16:21.839 03:16:25 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:21.839 03:16:25 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:21.839 03:16:25 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:21.839 03:16:25 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:21.839 03:16:25 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:22.099 03:16:25 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:22.099 { 00:16:22.099 "name": "ftl0", 00:16:22.099 "aliases": [ 00:16:22.099 "e65d81cb-0fb4-4410-9e48-4b07019a2d5e" 00:16:22.099 ], 00:16:22.099 "product_name": "FTL disk", 00:16:22.099 "block_size": 4096, 00:16:22.099 "num_blocks": 23592960, 00:16:22.099 "uuid": "e65d81cb-0fb4-4410-9e48-4b07019a2d5e", 00:16:22.099 "assigned_rate_limits": { 00:16:22.099 "rw_ios_per_sec": 0, 00:16:22.099 "rw_mbytes_per_sec": 0, 00:16:22.099 "r_mbytes_per_sec": 0, 00:16:22.099 "w_mbytes_per_sec": 0 00:16:22.099 }, 00:16:22.099 "claimed": false, 00:16:22.099 "zoned": false, 00:16:22.099 "supported_io_types": { 00:16:22.099 "read": true, 00:16:22.099 "write": true, 00:16:22.099 "unmap": true, 00:16:22.099 "flush": true, 00:16:22.099 "reset": false, 00:16:22.099 "nvme_admin": false, 00:16:22.100 "nvme_io": false, 00:16:22.100 "nvme_io_md": false, 00:16:22.100 "write_zeroes": true, 00:16:22.100 "zcopy": false, 00:16:22.100 "get_zone_info": false, 00:16:22.100 "zone_management": false, 00:16:22.100 "zone_append": false, 00:16:22.100 "compare": false, 00:16:22.100 "compare_and_write": false, 00:16:22.100 "abort": false, 00:16:22.100 "seek_hole": false, 00:16:22.100 "seek_data": false, 00:16:22.100 "copy": false, 00:16:22.100 "nvme_iov_md": false 00:16:22.100 }, 00:16:22.100 "driver_specific": { 00:16:22.100 "ftl": { 00:16:22.100 "base_bdev": "d234f438-5ab4-430d-b6dd-2436458dcadf", 
00:16:22.100 "cache": "nvc0n1p0" 00:16:22.100 } 00:16:22.100 } 00:16:22.100 } 00:16:22.100 ]' 00:16:22.100 03:16:25 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:22.100 03:16:25 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:22.100 03:16:25 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:22.362 [2024-11-18 03:16:25.795073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.362 [2024-11-18 03:16:25.795129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:22.362 [2024-11-18 03:16:25.795144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:22.362 [2024-11-18 03:16:25.795152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.362 [2024-11-18 03:16:25.795198] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:22.362 [2024-11-18 03:16:25.795759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.362 [2024-11-18 03:16:25.795783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:22.362 [2024-11-18 03:16:25.795792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:16:22.362 [2024-11-18 03:16:25.795803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.362 [2024-11-18 03:16:25.796386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.362 [2024-11-18 03:16:25.796422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:22.362 [2024-11-18 03:16:25.796431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:16:22.362 [2024-11-18 03:16:25.796445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.362 [2024-11-18 03:16:25.800107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.362 [2024-11-18 03:16:25.800133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:22.362 [2024-11-18 03:16:25.800144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.632 ms 00:16:22.362 [2024-11-18 03:16:25.800155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.362 [2024-11-18 03:16:25.807178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.362 [2024-11-18 03:16:25.807211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:22.362 [2024-11-18 03:16:25.807220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.976 ms 00:16:22.362 [2024-11-18 03:16:25.807232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.362 [2024-11-18 03:16:25.809084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.362 [2024-11-18 03:16:25.809135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:22.362 [2024-11-18 03:16:25.809147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.747 ms 00:16:22.362 [2024-11-18 03:16:25.809159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.362 [2024-11-18 03:16:25.813556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.362 [2024-11-18 03:16:25.813594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:22.362 [2024-11-18 03:16:25.813605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.350 ms 00:16:22.362 [2024-11-18 03:16:25.813628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.362 [2024-11-18 03:16:25.813817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.362 [2024-11-18 03:16:25.813830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:22.362 [2024-11-18 03:16:25.813845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:16:22.362 [2024-11-18 03:16:25.813855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.362 [2024-11-18 03:16:25.815645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.362 [2024-11-18 03:16:25.815678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:22.362 [2024-11-18 03:16:25.815687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:16:22.362 [2024-11-18 03:16:25.815699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.362 [2024-11-18 03:16:25.816990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.362 [2024-11-18 03:16:25.817149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:22.362 [2024-11-18 03:16:25.817164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.246 ms 00:16:22.362 [2024-11-18 03:16:25.817174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.362 [2024-11-18 03:16:25.818083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.362 [2024-11-18 03:16:25.818112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:22.362 [2024-11-18 03:16:25.818121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:16:22.362 [2024-11-18 03:16:25.818130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.362 [2024-11-18 03:16:25.819416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.362 [2024-11-18 03:16:25.819449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:22.362 [2024-11-18 03:16:25.819458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.201 ms 00:16:22.362 [2024-11-18 03:16:25.819467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.362 [2024-11-18 03:16:25.819529] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:22.362 [2024-11-18 03:16:25.819548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819616] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:22.362 [2024-11-18 03:16:25.819763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:22.363 [2024-11-18 03:16:25.819770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:22.363 [2024-11-18 03:16:25.819780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:22.363 [2024-11-18 03:16:25.819789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:22.363 [2024-11-18 03:16:25.819816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:22.363 [2024-11-18 03:16:25.819824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:22.363 [2024-11-18 03:16:25.819834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:22.363 [2024-11-18 03:16:25.819841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:22.363 [2024-11-18 
03:16:25.819851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:16:22.363 [2024-11-18 03:16:25.819858-03:16:25.821185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 34-100: 0 / 261120 wr_cnt: 0 state: free (67 identical per-band entries condensed)
00:16:22.363 [2024-11-18 03:16:25.821224] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:22.363 [2024-11-18 03:16:25.821245] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e65d81cb-0fb4-4410-9e48-4b07019a2d5e
00:16:22.363 [2024-11-18 03:16:25.821322] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:22.363 [2024-11-18 03:16:25.821345] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:16:22.363 [2024-11-18 03:16:25.821367] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:16:22.363 [2024-11-18 03:16:25.821387] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:16:22.363 [2024-11-18 03:16:25.821407] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:16:22.363 [2024-11-18 03:16:25.821459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
[2024-11-18 03:16:25.821480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:22.364 [2024-11-18 03:16:25.821498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:22.364 [2024-11-18 03:16:25.821519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:22.364 [2024-11-18 03:16:25.821564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.364 [2024-11-18 03:16:25.821590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:22.364 [2024-11-18 03:16:25.821639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.036 ms 00:16:22.364 [2024-11-18 03:16:25.821665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.823585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.364 [2024-11-18 03:16:25.823685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:22.364 [2024-11-18 03:16:25.823700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.871 ms 00:16:22.364 [2024-11-18 03:16:25.823713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.823833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.364 [2024-11-18 03:16:25.823859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:22.364 [2024-11-18 03:16:25.823870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:22.364 [2024-11-18 03:16:25.823880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.830376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.364 [2024-11-18 03:16:25.830412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:22.364 [2024-11-18 03:16:25.830422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.364 [2024-11-18 03:16:25.830435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.830534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.364 [2024-11-18 03:16:25.830546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:22.364 [2024-11-18 03:16:25.830554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.364 [2024-11-18 03:16:25.830565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.830629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.364 [2024-11-18 03:16:25.830641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:22.364 [2024-11-18 03:16:25.830649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.364 [2024-11-18 03:16:25.830659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.830693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.364 [2024-11-18 03:16:25.830702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:22.364 [2024-11-18 03:16:25.830710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.364 [2024-11-18 03:16:25.830719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.842743] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:22.364 [2024-11-18 03:16:25.842788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:22.364 [2024-11-18 03:16:25.842800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.364 [2024-11-18 03:16:25.842813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.852560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.364 [2024-11-18 03:16:25.852602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:22.364 [2024-11-18 03:16:25.852613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.364 [2024-11-18 03:16:25.852626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.852686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.364 [2024-11-18 03:16:25.852698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:22.364 [2024-11-18 03:16:25.852707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.364 [2024-11-18 03:16:25.852716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.852782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.364 [2024-11-18 03:16:25.852793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:22.364 [2024-11-18 03:16:25.852801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.364 [2024-11-18 03:16:25.852811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.852900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.364 [2024-11-18 03:16:25.852926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:22.364 [2024-11-18 03:16:25.852935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.364 [2024-11-18 03:16:25.852945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.853014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.364 [2024-11-18 03:16:25.853029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:22.364 [2024-11-18 03:16:25.853037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.364 [2024-11-18 03:16:25.853048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.853105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.364 [2024-11-18 03:16:25.853117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:22.364 [2024-11-18 03:16:25.853125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.364 [2024-11-18 03:16:25.853134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 [2024-11-18 03:16:25.853209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.364 [2024-11-18 03:16:25.853224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:22.364 [2024-11-18 03:16:25.853234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.364 [2024-11-18 03:16:25.853244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.364 
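A note on the statistics in the dump above: FTL's write amplification factor is the ratio of total media writes to user writes. With user writes still at 0 (the 960 total writes are evidently FTL's own metadata traffic), the ratio is undefined, which ftl_debug.c prints as "WAF: inf". A minimal spot-check of the same arithmetic, as a hypothetical awk one-liner rather than the actual C code:

    awk -v total=960 -v user=0 'BEGIN { print (user == 0 ? "WAF: inf" : "WAF: " total / user) }'

Once spdk_dd pushes real data through ftl0 later in this test, user writes become non-zero and the ratio turns finite.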
[2024-11-18 03:16:25.853461] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.374 ms, result 0
00:16:22.364 true
00:16:22.364 03:16:25 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85304
00:16:22.364 03:16:25 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85304 ']'
00:16:22.364 03:16:25 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85304
00:16:22.364 03:16:25 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:16:22.364 03:16:25 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:22.364 03:16:25 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85304
00:16:22.364 killing process with pid 85304
00:16:22.364 03:16:25 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:22.364 03:16:25 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:22.364 03:16:25 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85304'
00:16:22.364 03:16:25 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85304
00:16:22.364 03:16:25 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85304
00:16:27.653 03:16:30 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:16:28.598 65536+0 records in
00:16:28.598 65536+0 records out
00:16:28.598 268435456 bytes (268 MB, 256 MiB) copied, 1.09605 s, 245 MB/s
00:16:28.598 03:16:31 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:16:28.598 [2024-11-18 03:16:31.907867] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
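The xtrace above is the killprocess helper from autotest_common.sh shutting down the SPDK reactor, pid 85304. Reconstructed from the traced commands alone, a minimal sketch of that logic looks like the following, with the caveat that the real helper carries more error handling than shown here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                      # the '[' -z 85304 ']' guard in the trace
        kill -0 "$pid" || return 1                     # bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1         # the reactor_0 = sudo comparison (simplified)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap it so the test sees the exit status
    }

The dd numbers above also check out: 65536 blocks x 4 KiB = 268435456 bytes (256 MiB), and 268435456 B / 1.09605 s is roughly 244.9e6 B/s, which dd rounds to 245 MB/s in decimal megabytes. trim.sh then replays that random_pattern file into the FTL bdev via spdk_dd's --ob=ftl0.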
00:16:28.598 [2024-11-18 03:16:31.908022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85474 ] 00:16:28.598 [2024-11-18 03:16:32.054572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.598 [2024-11-18 03:16:32.127085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.859 [2024-11-18 03:16:32.275114] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:28.859 [2024-11-18 03:16:32.275220] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:29.121 [2024-11-18 03:16:32.438177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.121 [2024-11-18 03:16:32.438233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:29.121 [2024-11-18 03:16:32.438248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:29.121 [2024-11-18 03:16:32.438256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.121 [2024-11-18 03:16:32.440638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.121 [2024-11-18 03:16:32.440828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:29.121 [2024-11-18 03:16:32.440853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.364 ms 00:16:29.121 [2024-11-18 03:16:32.440862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.122 [2024-11-18 03:16:32.440936] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:29.122 [2024-11-18 03:16:32.441183] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:29.122 [2024-11-18 03:16:32.441201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.122 [2024-11-18 03:16:32.441209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:29.122 [2024-11-18 03:16:32.441221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:16:29.122 [2024-11-18 03:16:32.441229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.122 [2024-11-18 03:16:32.442759] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:29.122 [2024-11-18 03:16:32.445525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.122 [2024-11-18 03:16:32.445562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:29.122 [2024-11-18 03:16:32.445573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:16:29.122 [2024-11-18 03:16:32.445584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.122 [2024-11-18 03:16:32.445642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.122 [2024-11-18 03:16:32.445653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:29.122 [2024-11-18 03:16:32.445661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:29.122 [2024-11-18 03:16:32.445668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.122 [2024-11-18 03:16:32.452624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:29.122 [2024-11-18 03:16:32.452653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:29.122 [2024-11-18 03:16:32.452663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.914 ms 00:16:29.122 [2024-11-18 03:16:32.452675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.122 [2024-11-18 03:16:32.452783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.122 [2024-11-18 03:16:32.452795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:29.122 [2024-11-18 03:16:32.452804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:29.122 [2024-11-18 03:16:32.452811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.122 [2024-11-18 03:16:32.452847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.122 [2024-11-18 03:16:32.452861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:29.122 [2024-11-18 03:16:32.452870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:29.122 [2024-11-18 03:16:32.452877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.122 [2024-11-18 03:16:32.452900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:29.122 [2024-11-18 03:16:32.454627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.122 [2024-11-18 03:16:32.454660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:29.122 [2024-11-18 03:16:32.454670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.734 ms 00:16:29.122 [2024-11-18 03:16:32.454682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.122 [2024-11-18 03:16:32.454728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.122 [2024-11-18 03:16:32.454740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:29.122 [2024-11-18 03:16:32.454754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:29.122 [2024-11-18 03:16:32.454761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.122 [2024-11-18 03:16:32.454780] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:29.122 [2024-11-18 03:16:32.454799] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:29.122 [2024-11-18 03:16:32.454834] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:29.122 [2024-11-18 03:16:32.454853] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:29.122 [2024-11-18 03:16:32.454962] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:29.122 [2024-11-18 03:16:32.454973] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:29.122 [2024-11-18 03:16:32.454988] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:29.122 [2024-11-18 03:16:32.454999] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:29.122 [2024-11-18 03:16:32.455008] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:29.122 [2024-11-18 03:16:32.455016] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:29.122 [2024-11-18 03:16:32.455024] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:29.122 [2024-11-18 03:16:32.455031] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:29.122 [2024-11-18 03:16:32.455038] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:29.122 [2024-11-18 03:16:32.455046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.122 [2024-11-18 03:16:32.455057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:29.122 [2024-11-18 03:16:32.455067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:16:29.122 [2024-11-18 03:16:32.455075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.122 [2024-11-18 03:16:32.455166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.122 [2024-11-18 03:16:32.455176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:29.122 [2024-11-18 03:16:32.455184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:29.122 [2024-11-18 03:16:32.455191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.122 [2024-11-18 03:16:32.455295] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:29.122 [2024-11-18 03:16:32.455328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:29.122 [2024-11-18 03:16:32.455338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:29.122 [2024-11-18 03:16:32.455352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.122 [2024-11-18 03:16:32.455364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:29.122 [2024-11-18 03:16:32.455373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:29.122 [2024-11-18 03:16:32.455382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:29.122 [2024-11-18 03:16:32.455390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:29.122 [2024-11-18 03:16:32.455418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:29.122 [2024-11-18 03:16:32.455428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:29.122 [2024-11-18 03:16:32.455436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:29.122 [2024-11-18 03:16:32.455447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:29.122 [2024-11-18 03:16:32.455456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:29.122 [2024-11-18 03:16:32.455464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:29.122 [2024-11-18 03:16:32.455472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:29.122 [2024-11-18 03:16:32.455480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.122 [2024-11-18 03:16:32.455489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:29.122 [2024-11-18 03:16:32.455497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:29.122 [2024-11-18 03:16:32.455504] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.122 [2024-11-18 03:16:32.455512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:29.122 [2024-11-18 03:16:32.455524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:29.122 [2024-11-18 03:16:32.455532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.122 [2024-11-18 03:16:32.455540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:29.122 [2024-11-18 03:16:32.455548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:29.122 [2024-11-18 03:16:32.455559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.122 [2024-11-18 03:16:32.455568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:29.122 [2024-11-18 03:16:32.455576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:29.122 [2024-11-18 03:16:32.455584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.122 [2024-11-18 03:16:32.455592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:29.122 [2024-11-18 03:16:32.455600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:29.122 [2024-11-18 03:16:32.455607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.122 [2024-11-18 03:16:32.455614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:29.122 [2024-11-18 03:16:32.455621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:29.122 [2024-11-18 03:16:32.455627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:29.122 [2024-11-18 03:16:32.455635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:29.122 [2024-11-18 03:16:32.455642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:29.122 [2024-11-18 03:16:32.455648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:29.122 [2024-11-18 03:16:32.455655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:29.122 [2024-11-18 03:16:32.455662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:29.122 [2024-11-18 03:16:32.455669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.122 [2024-11-18 03:16:32.455677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:29.122 [2024-11-18 03:16:32.455684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:29.122 [2024-11-18 03:16:32.455691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.122 [2024-11-18 03:16:32.455698] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:29.122 [2024-11-18 03:16:32.455705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:29.122 [2024-11-18 03:16:32.455712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:29.122 [2024-11-18 03:16:32.455719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.123 [2024-11-18 03:16:32.455728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:29.123 [2024-11-18 03:16:32.455735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:29.123 [2024-11-18 03:16:32.455742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:29.123 
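Two of the layout figures above can be cross-checked directly. The l2p region stores one 4-byte address per L2P entry, so 23592960 entries x 4 B is exactly the 90.00 MiB shown; the same region reappears in the superblock dump below as type:0x2 with blk_sz:0x5a00, i.e. 23040 blocks of 4 KiB = 90 MiB again. A quick bash spot-check, assuming the 4 KiB FTL block size implied by the dump:

    echo $(( 23592960 * 4 / 1048576 ))   # l2p table: 23592960 entries x 4 B   -> 90 MiB
    echo $(( 23592960 * 4 / 1024 ))      # mapped space: 23592960 blocks x 4 KiB -> 92160 MiB (90 GiB)

Those 90 GiB of mapped user space sit inside the 102400.00 MiB data_btm region; the difference is presumably FTL's over-provisioning reserve. The L2P table itself is paged through a 60 MiB resident cache, per the ftl_l2p_cache line further down.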
[2024-11-18 03:16:32.455749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:29.123 [2024-11-18 03:16:32.455755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:29.123 [2024-11-18 03:16:32.455763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:29.123 [2024-11-18 03:16:32.455771] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:29.123 [2024-11-18 03:16:32.455781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:29.123 [2024-11-18 03:16:32.455794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:29.123 [2024-11-18 03:16:32.455804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:29.123 [2024-11-18 03:16:32.455811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:29.123 [2024-11-18 03:16:32.455818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:29.123 [2024-11-18 03:16:32.455826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:29.123 [2024-11-18 03:16:32.455834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:29.123 [2024-11-18 03:16:32.455841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:29.123 [2024-11-18 03:16:32.455848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:29.123 [2024-11-18 03:16:32.455855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:29.123 [2024-11-18 03:16:32.455862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:29.123 [2024-11-18 03:16:32.455870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:29.123 [2024-11-18 03:16:32.455876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:29.123 [2024-11-18 03:16:32.455883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:29.123 [2024-11-18 03:16:32.455891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:29.123 [2024-11-18 03:16:32.455897] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:29.123 [2024-11-18 03:16:32.455907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:29.123 [2024-11-18 03:16:32.455918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:29.123 [2024-11-18 03:16:32.455929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:29.123 [2024-11-18 03:16:32.455938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:29.123 [2024-11-18 03:16:32.455945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:29.123 [2024-11-18 03:16:32.455952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.455960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:29.123 [2024-11-18 03:16:32.455970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:16:29.123 [2024-11-18 03:16:32.455980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.476835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.476897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:29.123 [2024-11-18 03:16:32.476915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.799 ms 00:16:29.123 [2024-11-18 03:16:32.476932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.477127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.477151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:29.123 [2024-11-18 03:16:32.477165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:16:29.123 [2024-11-18 03:16:32.477180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.488866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.488901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:29.123 [2024-11-18 03:16:32.488911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.651 ms 00:16:29.123 [2024-11-18 03:16:32.488920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.488980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.488990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:29.123 [2024-11-18 03:16:32.489005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:29.123 [2024-11-18 03:16:32.489013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.489473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.489490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:29.123 [2024-11-18 03:16:32.489507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:16:29.123 [2024-11-18 03:16:32.489516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.489664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.489675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:29.123 [2024-11-18 03:16:32.489684] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:16:29.123 [2024-11-18 03:16:32.489697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.496249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.496447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:29.123 [2024-11-18 03:16:32.496463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.529 ms 00:16:29.123 [2024-11-18 03:16:32.496471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.499849] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:29.123 [2024-11-18 03:16:32.499975] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:29.123 [2024-11-18 03:16:32.499999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.500007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:29.123 [2024-11-18 03:16:32.500016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.444 ms 00:16:29.123 [2024-11-18 03:16:32.500024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.516298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.516337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:29.123 [2024-11-18 03:16:32.516349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.218 ms 00:16:29.123 [2024-11-18 03:16:32.516358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.518634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.518665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:29.123 [2024-11-18 03:16:32.518674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.220 ms 00:16:29.123 [2024-11-18 03:16:32.518682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.520794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.520825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:29.123 [2024-11-18 03:16:32.520841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.073 ms 00:16:29.123 [2024-11-18 03:16:32.520847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.521207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.521220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:29.123 [2024-11-18 03:16:32.521228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:16:29.123 [2024-11-18 03:16:32.521236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.541851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.541898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:29.123 [2024-11-18 03:16:32.541910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.591 ms 00:16:29.123 [2024-11-18 03:16:32.541927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.549651] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:29.123 [2024-11-18 03:16:32.567892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.567934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:29.123 [2024-11-18 03:16:32.567946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.882 ms 00:16:29.123 [2024-11-18 03:16:32.567954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.568046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.568057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:29.123 [2024-11-18 03:16:32.568067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:29.123 [2024-11-18 03:16:32.568074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.568131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.568144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:29.123 [2024-11-18 03:16:32.568153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:29.123 [2024-11-18 03:16:32.568160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.123 [2024-11-18 03:16:32.568183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.123 [2024-11-18 03:16:32.568192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:29.124 [2024-11-18 03:16:32.568200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:29.124 [2024-11-18 03:16:32.568208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.124 [2024-11-18 03:16:32.568248] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:29.124 [2024-11-18 03:16:32.568260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.124 [2024-11-18 03:16:32.568268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:29.124 [2024-11-18 03:16:32.568287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:29.124 [2024-11-18 03:16:32.568295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.124 [2024-11-18 03:16:32.572995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.124 [2024-11-18 03:16:32.573157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:29.124 [2024-11-18 03:16:32.573174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.659 ms 00:16:29.124 [2024-11-18 03:16:32.573184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.124 [2024-11-18 03:16:32.573265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.124 [2024-11-18 03:16:32.573275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:29.124 [2024-11-18 03:16:32.573288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:29.124 [2024-11-18 03:16:32.573296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.124 
[2024-11-18 03:16:32.574258] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:29.124 [2024-11-18 03:16:32.575366] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.766 ms, result 0 00:16:29.124 [2024-11-18 03:16:32.576585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:29.124 [2024-11-18 03:16:32.584465] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:30.066  [2024-11-18T03:16:34.651Z] Copying: 19/256 [MB] (19 MBps) [2024-11-18T03:16:35.615Z] Copying: 37/256 [MB] (18 MBps) [2024-11-18T03:16:37.003Z] Copying: 56/256 [MB] (18 MBps) [2024-11-18T03:16:37.947Z] Copying: 73/256 [MB] (17 MBps) [2024-11-18T03:16:38.892Z] Copying: 92/256 [MB] (19 MBps) [2024-11-18T03:16:39.840Z] Copying: 112/256 [MB] (19 MBps) [2024-11-18T03:16:40.782Z] Copying: 128/256 [MB] (15 MBps) [2024-11-18T03:16:41.716Z] Copying: 146/256 [MB] (18 MBps) [2024-11-18T03:16:42.650Z] Copying: 165/256 [MB] (18 MBps) [2024-11-18T03:16:44.024Z] Copying: 184/256 [MB] (19 MBps) [2024-11-18T03:16:44.589Z] Copying: 207/256 [MB] (23 MBps) [2024-11-18T03:16:45.964Z] Copying: 231/256 [MB] (23 MBps) [2024-11-18T03:16:45.964Z] Copying: 253/256 [MB] (21 MBps) [2024-11-18T03:16:45.964Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-18 03:16:45.752741] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:42.387 [2024-11-18 03:16:45.754113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.387 [2024-11-18 03:16:45.754149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:42.387 [2024-11-18 03:16:45.754162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:42.387 [2024-11-18 03:16:45.754172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.387 [2024-11-18 03:16:45.754189] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:42.387 [2024-11-18 03:16:45.754755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.387 [2024-11-18 03:16:45.754774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:42.387 [2024-11-18 03:16:45.754788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:16:42.387 [2024-11-18 03:16:45.754795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.387 [2024-11-18 03:16:45.756463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.387 [2024-11-18 03:16:45.756489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:42.387 [2024-11-18 03:16:45.756497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:16:42.387 [2024-11-18 03:16:45.756503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.387 [2024-11-18 03:16:45.762189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.387 [2024-11-18 03:16:45.762220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:42.387 [2024-11-18 03:16:45.762228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.672 ms 00:16:42.387 [2024-11-18 03:16:45.762234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.387 
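The 'FTL startup' management process above reports 135.766 ms overall, and each step logs a matching name/duration pair through trace_step. Against the raw console log (one record per line, as Jenkins emits it), a hypothetical one-liner to rank the slowest steps, with build.log standing in for this file:

    awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print $1 " ms\t" name }' build.log | sort -rn | head

The same tally works for the 'FTL shutdown' sequence in progress here; in this startup the big contributors are Initialize L2P (25.882 ms), Initialize metadata (20.799 ms) and Restore P2L checkpoints (20.591 ms).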
[2024-11-18 03:16:45.767489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.387 [2024-11-18 03:16:45.767618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:42.387 [2024-11-18 03:16:45.767638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.217 ms 00:16:42.387 [2024-11-18 03:16:45.767644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.387 [2024-11-18 03:16:45.769622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.387 [2024-11-18 03:16:45.769649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:42.387 [2024-11-18 03:16:45.769657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.939 ms 00:16:42.387 [2024-11-18 03:16:45.769663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.387 [2024-11-18 03:16:45.773511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.387 [2024-11-18 03:16:45.773543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:42.387 [2024-11-18 03:16:45.773550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.811 ms 00:16:42.387 [2024-11-18 03:16:45.773560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.387 [2024-11-18 03:16:45.773644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.387 [2024-11-18 03:16:45.773651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:42.387 [2024-11-18 03:16:45.773658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:42.387 [2024-11-18 03:16:45.773663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.387 [2024-11-18 03:16:45.776393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.387 [2024-11-18 03:16:45.776418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:42.387 [2024-11-18 03:16:45.776425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.716 ms 00:16:42.387 [2024-11-18 03:16:45.776430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.387 [2024-11-18 03:16:45.778581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.387 [2024-11-18 03:16:45.778686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:42.387 [2024-11-18 03:16:45.778698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.125 ms 00:16:42.387 [2024-11-18 03:16:45.778703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.387 [2024-11-18 03:16:45.780412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.387 [2024-11-18 03:16:45.780437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:42.387 [2024-11-18 03:16:45.780444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.684 ms 00:16:42.387 [2024-11-18 03:16:45.780449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.387 [2024-11-18 03:16:45.782256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.387 [2024-11-18 03:16:45.782368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:42.387 [2024-11-18 03:16:45.782380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.738 ms 00:16:42.387 [2024-11-18 03:16:45.782386] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:42.387 [2024-11-18 03:16:45.782433] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:16:42.387 [2024-11-18 03:16:45.782452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:16:42.387 [2024-11-18 03:16:45.782464-03:16:45.783030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-98: 0 / 261120 wr_cnt: 0 state: free (97 identical per-band entries condensed)
00:16:42.389 [2024-11-18 03:16:45.783035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:42.389 [2024-11-18 03:16:45.783041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:42.389 [2024-11-18 03:16:45.783054] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:42.389 [2024-11-18 03:16:45.783060] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e65d81cb-0fb4-4410-9e48-4b07019a2d5e 00:16:42.389 [2024-11-18 03:16:45.783067] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:42.389 [2024-11-18 03:16:45.783080] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:42.389 [2024-11-18 03:16:45.783086] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:42.389 [2024-11-18 03:16:45.783092] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:42.389 [2024-11-18 03:16:45.783097] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:42.389 [2024-11-18 03:16:45.783104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:42.389 [2024-11-18 03:16:45.783110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:42.389 [2024-11-18 03:16:45.783115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:42.389 [2024-11-18 03:16:45.783120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:42.389 [2024-11-18 03:16:45.783125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.389 [2024-11-18 03:16:45.783131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:42.389 [2024-11-18 03:16:45.783138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:16:42.389 [2024-11-18 03:16:45.783147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.784866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.389 [2024-11-18 03:16:45.784884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:42.389 [2024-11-18 03:16:45.784891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.706 ms 00:16:42.389 [2024-11-18 03:16:45.784897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.784988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.389 [2024-11-18 03:16:45.784995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:42.389 [2024-11-18 03:16:45.785006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:42.389 [2024-11-18 03:16:45.785012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.790633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.389 [2024-11-18 03:16:45.790661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:42.389 [2024-11-18 03:16:45.790669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.389 [2024-11-18 03:16:45.790675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.790722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.389 [2024-11-18 03:16:45.790729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:16:42.389 [2024-11-18 03:16:45.790739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.389 [2024-11-18 03:16:45.790745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.790778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.389 [2024-11-18 03:16:45.790785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:42.389 [2024-11-18 03:16:45.790790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.389 [2024-11-18 03:16:45.790796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.790813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.389 [2024-11-18 03:16:45.790821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:42.389 [2024-11-18 03:16:45.790827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.389 [2024-11-18 03:16:45.790835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.801754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.389 [2024-11-18 03:16:45.801786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:42.389 [2024-11-18 03:16:45.801796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.389 [2024-11-18 03:16:45.801803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.810516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.389 [2024-11-18 03:16:45.810550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:42.389 [2024-11-18 03:16:45.810564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.389 [2024-11-18 03:16:45.810570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.810597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.389 [2024-11-18 03:16:45.810604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:42.389 [2024-11-18 03:16:45.810612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.389 [2024-11-18 03:16:45.810618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.810643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.389 [2024-11-18 03:16:45.810650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:42.389 [2024-11-18 03:16:45.810656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.389 [2024-11-18 03:16:45.810662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.810724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.389 [2024-11-18 03:16:45.810732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:42.389 [2024-11-18 03:16:45.810739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.389 [2024-11-18 03:16:45.810745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.810772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:16:42.389 [2024-11-18 03:16:45.810779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:42.389 [2024-11-18 03:16:45.810790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.389 [2024-11-18 03:16:45.810797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.810835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.389 [2024-11-18 03:16:45.810843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:42.389 [2024-11-18 03:16:45.810849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.389 [2024-11-18 03:16:45.810855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.810895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.389 [2024-11-18 03:16:45.810905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:42.389 [2024-11-18 03:16:45.810912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.389 [2024-11-18 03:16:45.810918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.389 [2024-11-18 03:16:45.811046] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.910 ms, result 0 00:16:42.955 00:16:42.955 00:16:42.955 03:16:46 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85627 00:16:42.955 03:16:46 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85627 00:16:42.955 03:16:46 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85627 ']' 00:16:42.955 03:16:46 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.955 03:16:46 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:42.955 03:16:46 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:42.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.955 03:16:46 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.955 03:16:46 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:42.955 03:16:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:42.955 [2024-11-18 03:16:46.354059] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:42.955 [2024-11-18 03:16:46.354350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85627 ] 00:16:42.955 [2024-11-18 03:16:46.498009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.213 [2024-11-18 03:16:46.539336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.780 03:16:47 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:43.780 03:16:47 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:43.780 03:16:47 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:44.039 [2024-11-18 03:16:47.388659] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:44.039 [2024-11-18 03:16:47.388855] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:44.039 [2024-11-18 03:16:47.559580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.040 [2024-11-18 03:16:47.559623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:44.040 [2024-11-18 03:16:47.559638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:44.040 [2024-11-18 03:16:47.559646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.040 [2024-11-18 03:16:47.561549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.040 [2024-11-18 03:16:47.561582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.040 [2024-11-18 03:16:47.561590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.888 ms 00:16:44.040 [2024-11-18 03:16:47.561598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.040 [2024-11-18 03:16:47.561660] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:44.040 [2024-11-18 03:16:47.561895] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:44.040 [2024-11-18 03:16:47.561907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.040 [2024-11-18 03:16:47.561914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.040 [2024-11-18 03:16:47.561922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:16:44.040 [2024-11-18 03:16:47.561928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.040 [2024-11-18 03:16:47.563286] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:44.040 [2024-11-18 03:16:47.566040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.040 [2024-11-18 03:16:47.566211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:44.040 [2024-11-18 03:16:47.566228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.753 ms 00:16:44.040 [2024-11-18 03:16:47.566235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.040 [2024-11-18 03:16:47.566284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.040 [2024-11-18 03:16:47.566292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:44.040 [2024-11-18 03:16:47.566302] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:44.040 [2024-11-18 03:16:47.566308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.040 [2024-11-18 03:16:47.572567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.040 [2024-11-18 03:16:47.572683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.040 [2024-11-18 03:16:47.572697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.198 ms 00:16:44.040 [2024-11-18 03:16:47.572704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.040 [2024-11-18 03:16:47.572782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.040 [2024-11-18 03:16:47.572791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.040 [2024-11-18 03:16:47.572799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:44.040 [2024-11-18 03:16:47.572805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.040 [2024-11-18 03:16:47.572828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.040 [2024-11-18 03:16:47.572834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:44.040 [2024-11-18 03:16:47.572842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:44.040 [2024-11-18 03:16:47.572851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.040 [2024-11-18 03:16:47.572873] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:44.040 [2024-11-18 03:16:47.574462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.040 [2024-11-18 03:16:47.574489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.040 [2024-11-18 03:16:47.574497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:16:44.040 [2024-11-18 03:16:47.574508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.040 [2024-11-18 03:16:47.574546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.040 [2024-11-18 03:16:47.574558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:44.040 [2024-11-18 03:16:47.574567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:44.040 [2024-11-18 03:16:47.574574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.040 [2024-11-18 03:16:47.574595] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:44.040 [2024-11-18 03:16:47.574613] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:44.040 [2024-11-18 03:16:47.574646] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:44.040 [2024-11-18 03:16:47.574662] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:44.040 [2024-11-18 03:16:47.574746] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:44.040 [2024-11-18 03:16:47.574759] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:44.040 [2024-11-18 03:16:47.574767] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:44.040 [2024-11-18 03:16:47.574777] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:44.040 [2024-11-18 03:16:47.574784] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:44.040 [2024-11-18 03:16:47.574794] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:44.040 [2024-11-18 03:16:47.574800] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:44.040 [2024-11-18 03:16:47.574808] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:44.040 [2024-11-18 03:16:47.574814] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:44.040 [2024-11-18 03:16:47.574821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.040 [2024-11-18 03:16:47.574830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:44.040 [2024-11-18 03:16:47.574837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:16:44.040 [2024-11-18 03:16:47.574843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.040 [2024-11-18 03:16:47.574912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.040 [2024-11-18 03:16:47.574919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:44.040 [2024-11-18 03:16:47.574927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:44.040 [2024-11-18 03:16:47.574932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.040 [2024-11-18 03:16:47.575012] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:44.040 [2024-11-18 03:16:47.575020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:44.040 [2024-11-18 03:16:47.575030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.040 [2024-11-18 03:16:47.575037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.040 [2024-11-18 03:16:47.575046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:44.040 [2024-11-18 03:16:47.575052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:44.040 [2024-11-18 03:16:47.575060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:44.040 [2024-11-18 03:16:47.575067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:44.040 [2024-11-18 03:16:47.575079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:44.040 [2024-11-18 03:16:47.575085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.040 [2024-11-18 03:16:47.575096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:44.040 [2024-11-18 03:16:47.575101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:44.040 [2024-11-18 03:16:47.575108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.040 [2024-11-18 03:16:47.575114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:44.040 [2024-11-18 03:16:47.575123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:44.040 [2024-11-18 03:16:47.575129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.040 
[2024-11-18 03:16:47.575136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:44.040 [2024-11-18 03:16:47.575145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:44.040 [2024-11-18 03:16:47.575153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.040 [2024-11-18 03:16:47.575159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:44.040 [2024-11-18 03:16:47.575168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:44.040 [2024-11-18 03:16:47.575174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.040 [2024-11-18 03:16:47.575181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:44.040 [2024-11-18 03:16:47.575188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:44.040 [2024-11-18 03:16:47.575195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.040 [2024-11-18 03:16:47.575201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:44.040 [2024-11-18 03:16:47.575208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:44.040 [2024-11-18 03:16:47.575214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.040 [2024-11-18 03:16:47.575222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:44.040 [2024-11-18 03:16:47.575229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:44.040 [2024-11-18 03:16:47.575236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.040 [2024-11-18 03:16:47.575242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:44.040 [2024-11-18 03:16:47.575250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:44.040 [2024-11-18 03:16:47.575256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.040 [2024-11-18 03:16:47.575264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:44.040 [2024-11-18 03:16:47.575269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:44.040 [2024-11-18 03:16:47.575278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.040 [2024-11-18 03:16:47.575285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:44.040 [2024-11-18 03:16:47.575293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:44.040 [2024-11-18 03:16:47.575299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.041 [2024-11-18 03:16:47.575306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:44.041 [2024-11-18 03:16:47.575334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:44.041 [2024-11-18 03:16:47.575343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.041 [2024-11-18 03:16:47.575349] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:44.041 [2024-11-18 03:16:47.575359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:44.041 [2024-11-18 03:16:47.575366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.041 [2024-11-18 03:16:47.575374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.041 [2024-11-18 03:16:47.575380] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:44.041 [2024-11-18 03:16:47.575388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:44.041 [2024-11-18 03:16:47.575394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:44.041 [2024-11-18 03:16:47.575402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:44.041 [2024-11-18 03:16:47.575408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:44.041 [2024-11-18 03:16:47.575417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:44.041 [2024-11-18 03:16:47.575425] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:44.041 [2024-11-18 03:16:47.575435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.041 [2024-11-18 03:16:47.575442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:44.041 [2024-11-18 03:16:47.575450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:44.041 [2024-11-18 03:16:47.575457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:44.041 [2024-11-18 03:16:47.575465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:44.041 [2024-11-18 03:16:47.575472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:44.041 [2024-11-18 03:16:47.575480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:44.041 [2024-11-18 03:16:47.575486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:44.041 [2024-11-18 03:16:47.575494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:44.041 [2024-11-18 03:16:47.575500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:44.041 [2024-11-18 03:16:47.575507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:44.041 [2024-11-18 03:16:47.575512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:44.041 [2024-11-18 03:16:47.575519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:44.041 [2024-11-18 03:16:47.575525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:44.041 [2024-11-18 03:16:47.575533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:44.041 [2024-11-18 03:16:47.575539] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:44.041 [2024-11-18 
03:16:47.575547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.041 [2024-11-18 03:16:47.575555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:44.041 [2024-11-18 03:16:47.575563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:44.041 [2024-11-18 03:16:47.575569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:44.041 [2024-11-18 03:16:47.575576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:44.041 [2024-11-18 03:16:47.575582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.041 [2024-11-18 03:16:47.575590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:44.041 [2024-11-18 03:16:47.575596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:16:44.041 [2024-11-18 03:16:47.575603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.041 [2024-11-18 03:16:47.587209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.041 [2024-11-18 03:16:47.587246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.041 [2024-11-18 03:16:47.587256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.563 ms 00:16:44.041 [2024-11-18 03:16:47.587264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.041 [2024-11-18 03:16:47.587384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.041 [2024-11-18 03:16:47.587396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:44.041 [2024-11-18 03:16:47.587405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:44.041 [2024-11-18 03:16:47.587413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.041 [2024-11-18 03:16:47.597023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.041 [2024-11-18 03:16:47.597054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.041 [2024-11-18 03:16:47.597062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.590 ms 00:16:44.041 [2024-11-18 03:16:47.597070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.041 [2024-11-18 03:16:47.597105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.041 [2024-11-18 03:16:47.597116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.041 [2024-11-18 03:16:47.597123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:44.041 [2024-11-18 03:16:47.597130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.041 [2024-11-18 03:16:47.597542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.041 [2024-11-18 03:16:47.597565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.041 [2024-11-18 03:16:47.597573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:16:44.041 [2024-11-18 03:16:47.597581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:44.041 [2024-11-18 03:16:47.597713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.041 [2024-11-18 03:16:47.597731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.041 [2024-11-18 03:16:47.597741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:16:44.041 [2024-11-18 03:16:47.597750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.621177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.621221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:44.300 [2024-11-18 03:16:47.621233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.407 ms 00:16:44.300 [2024-11-18 03:16:47.621244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.624397] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:44.300 [2024-11-18 03:16:47.624434] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:44.300 [2024-11-18 03:16:47.624446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.624457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:44.300 [2024-11-18 03:16:47.624466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.054 ms 00:16:44.300 [2024-11-18 03:16:47.624475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.637226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.637258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:44.300 [2024-11-18 03:16:47.637268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.706 ms 00:16:44.300 [2024-11-18 03:16:47.637277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.639248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.639394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:44.300 [2024-11-18 03:16:47.639407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.901 ms 00:16:44.300 [2024-11-18 03:16:47.639415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.641035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.641065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:44.300 [2024-11-18 03:16:47.641072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.591 ms 00:16:44.300 [2024-11-18 03:16:47.641080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.641357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.641371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:44.300 [2024-11-18 03:16:47.641379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:16:44.300 [2024-11-18 03:16:47.641390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.659990] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.660029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:44.300 [2024-11-18 03:16:47.660038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.571 ms 00:16:44.300 [2024-11-18 03:16:47.660049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.666178] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:44.300 [2024-11-18 03:16:47.680812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.680843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:44.300 [2024-11-18 03:16:47.680856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.709 ms 00:16:44.300 [2024-11-18 03:16:47.680862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.680936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.680944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:44.300 [2024-11-18 03:16:47.680952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:44.300 [2024-11-18 03:16:47.680960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.681007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.681015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:44.300 [2024-11-18 03:16:47.681026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:44.300 [2024-11-18 03:16:47.681031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.681052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.681059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:44.300 [2024-11-18 03:16:47.681071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:44.300 [2024-11-18 03:16:47.681077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.681107] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:44.300 [2024-11-18 03:16:47.681114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.681122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:44.300 [2024-11-18 03:16:47.681128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:44.300 [2024-11-18 03:16:47.681134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.685440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.685472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:44.300 [2024-11-18 03:16:47.685481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.288 ms 00:16:44.300 [2024-11-18 03:16:47.685489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.685555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.300 [2024-11-18 03:16:47.685564] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:44.300 [2024-11-18 03:16:47.685572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:44.300 [2024-11-18 03:16:47.685583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.300 [2024-11-18 03:16:47.686386] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:44.300 [2024-11-18 03:16:47.687217] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.517 ms, result 0 00:16:44.301 [2024-11-18 03:16:47.689156] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:44.301 Some configs were skipped because the RPC state that can call them passed over. 00:16:44.301 03:16:47 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:44.559 [2024-11-18 03:16:47.911757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.559 [2024-11-18 03:16:47.911887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:44.559 [2024-11-18 03:16:47.911936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.556 ms 00:16:44.559 [2024-11-18 03:16:47.911956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.559 [2024-11-18 03:16:47.911997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.805 ms, result 0 00:16:44.559 true 00:16:44.559 03:16:47 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:44.559 [2024-11-18 03:16:48.112063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.559 [2024-11-18 03:16:48.112173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:44.559 [2024-11-18 03:16:48.112216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:16:44.559 [2024-11-18 03:16:48.112236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.559 [2024-11-18 03:16:48.112275] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.857 ms, result 0 00:16:44.559 true 00:16:44.559 03:16:48 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85627 00:16:44.559 03:16:48 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85627 ']' 00:16:44.559 03:16:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85627 00:16:44.559 03:16:48 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:44.559 03:16:48 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:44.559 03:16:48 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85627 00:16:44.818 killing process with pid 85627 00:16:44.818 03:16:48 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.818 03:16:48 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.818 03:16:48 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85627' 00:16:44.818 03:16:48 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85627 00:16:44.818 03:16:48 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85627 00:16:44.818 [2024-11-18 03:16:48.266697] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.818 [2024-11-18 03:16:48.266747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:44.818 [2024-11-18 03:16:48.266760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:44.818 [2024-11-18 03:16:48.266766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.818 [2024-11-18 03:16:48.266787] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:44.819 [2024-11-18 03:16:48.267329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.819 [2024-11-18 03:16:48.267349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:44.819 [2024-11-18 03:16:48.267358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:16:44.819 [2024-11-18 03:16:48.267366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.819 [2024-11-18 03:16:48.267613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.819 [2024-11-18 03:16:48.267625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:44.819 [2024-11-18 03:16:48.267633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:16:44.819 [2024-11-18 03:16:48.267641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.819 [2024-11-18 03:16:48.271250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.819 [2024-11-18 03:16:48.271281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:44.819 [2024-11-18 03:16:48.271289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.594 ms 00:16:44.819 [2024-11-18 03:16:48.271300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.819 [2024-11-18 03:16:48.276531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.819 [2024-11-18 03:16:48.276561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:44.819 [2024-11-18 03:16:48.276568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.192 ms 00:16:44.819 [2024-11-18 03:16:48.276578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.819 [2024-11-18 03:16:48.278779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.819 [2024-11-18 03:16:48.278811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:44.819 [2024-11-18 03:16:48.278818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms 00:16:44.819 [2024-11-18 03:16:48.278825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.819 [2024-11-18 03:16:48.282969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.819 [2024-11-18 03:16:48.283000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:44.819 [2024-11-18 03:16:48.283008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.116 ms 00:16:44.819 [2024-11-18 03:16:48.283019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.819 [2024-11-18 03:16:48.283122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.819 [2024-11-18 03:16:48.283131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:44.819 [2024-11-18 03:16:48.283138] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:44.819 [2024-11-18 03:16:48.283146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.819 [2024-11-18 03:16:48.285608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.819 [2024-11-18 03:16:48.285760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:44.819 [2024-11-18 03:16:48.285772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.448 ms 00:16:44.819 [2024-11-18 03:16:48.285784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.819 [2024-11-18 03:16:48.287431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.819 [2024-11-18 03:16:48.287461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:44.819 [2024-11-18 03:16:48.287467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:16:44.819 [2024-11-18 03:16:48.287474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.819 [2024-11-18 03:16:48.288873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.819 [2024-11-18 03:16:48.288964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:44.819 [2024-11-18 03:16:48.288976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:16:44.819 [2024-11-18 03:16:48.288983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.819 [2024-11-18 03:16:48.290585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.819 [2024-11-18 03:16:48.290615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:44.819 [2024-11-18 03:16:48.290622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:16:44.819 [2024-11-18 03:16:48.290629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.819 [2024-11-18 03:16:48.290656] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:44.819 [2024-11-18 03:16:48.290670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290746] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 
03:16:48.290924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.290999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.291008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.291014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.291021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.291027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.291035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:44.819 [2024-11-18 03:16:48.291042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:16:44.820 [2024-11-18 03:16:48.291103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:44.820 [2024-11-18 03:16:48.291412] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:44.820 [2024-11-18 03:16:48.291419] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e65d81cb-0fb4-4410-9e48-4b07019a2d5e 00:16:44.820 [2024-11-18 03:16:48.291427] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:44.820 [2024-11-18 03:16:48.291436] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:44.820 [2024-11-18 03:16:48.291443] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:44.820 [2024-11-18 03:16:48.291452] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:44.820 [2024-11-18 03:16:48.291460] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:44.820 [2024-11-18 03:16:48.291467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:44.820 [2024-11-18 03:16:48.291478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:44.820 [2024-11-18 03:16:48.291482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:44.820 [2024-11-18 03:16:48.291491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:44.820 [2024-11-18 03:16:48.291498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.820 
[2024-11-18 03:16:48.291505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:44.820 [2024-11-18 03:16:48.291511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:16:44.820 [2024-11-18 03:16:48.291520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.820 [2024-11-18 03:16:48.293221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.820 [2024-11-18 03:16:48.293246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:44.820 [2024-11-18 03:16:48.293254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:16:44.820 [2024-11-18 03:16:48.293262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.820 [2024-11-18 03:16:48.293379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.820 [2024-11-18 03:16:48.293391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:44.820 [2024-11-18 03:16:48.293398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:16:44.820 [2024-11-18 03:16:48.293409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.820 [2024-11-18 03:16:48.299531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.820 [2024-11-18 03:16:48.299664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:44.820 [2024-11-18 03:16:48.299676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.820 [2024-11-18 03:16:48.299684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.820 [2024-11-18 03:16:48.299755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.820 [2024-11-18 03:16:48.299765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.820 [2024-11-18 03:16:48.299772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.820 [2024-11-18 03:16:48.299783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.820 [2024-11-18 03:16:48.299818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.820 [2024-11-18 03:16:48.299829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.820 [2024-11-18 03:16:48.299836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.820 [2024-11-18 03:16:48.299845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.820 [2024-11-18 03:16:48.299859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.820 [2024-11-18 03:16:48.299867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.820 [2024-11-18 03:16:48.299874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.820 [2024-11-18 03:16:48.299882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.820 [2024-11-18 03:16:48.311120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.820 [2024-11-18 03:16:48.311270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.820 [2024-11-18 03:16:48.311284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.820 [2024-11-18 03:16:48.311292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.820 [2024-11-18 03:16:48.320051] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.820 [2024-11-18 03:16:48.320088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.820 [2024-11-18 03:16:48.320096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.820 [2024-11-18 03:16:48.320108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.820 [2024-11-18 03:16:48.320150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.820 [2024-11-18 03:16:48.320160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.821 [2024-11-18 03:16:48.320167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.821 [2024-11-18 03:16:48.320177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.821 [2024-11-18 03:16:48.320205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.821 [2024-11-18 03:16:48.320213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.821 [2024-11-18 03:16:48.320220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.821 [2024-11-18 03:16:48.320228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.821 [2024-11-18 03:16:48.320289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.821 [2024-11-18 03:16:48.320300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.821 [2024-11-18 03:16:48.320306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.821 [2024-11-18 03:16:48.320332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.821 [2024-11-18 03:16:48.320360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.821 [2024-11-18 03:16:48.320369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:44.821 [2024-11-18 03:16:48.320376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.821 [2024-11-18 03:16:48.320385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.821 [2024-11-18 03:16:48.320423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.821 [2024-11-18 03:16:48.320433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.821 [2024-11-18 03:16:48.320439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.821 [2024-11-18 03:16:48.320447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.821 [2024-11-18 03:16:48.320491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.821 [2024-11-18 03:16:48.320501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.821 [2024-11-18 03:16:48.320508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.821 [2024-11-18 03:16:48.320521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.821 [2024-11-18 03:16:48.320647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.927 ms, result 0 00:16:45.079 03:16:48 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:45.079 03:16:48 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:45.079 [2024-11-18 03:16:48.599575] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:45.079 [2024-11-18 03:16:48.599811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85663 ] 00:16:45.338 [2024-11-18 03:16:48.745522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.338 [2024-11-18 03:16:48.786701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.338 [2024-11-18 03:16:48.886422] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:45.338 [2024-11-18 03:16:48.886485] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:45.598 [2024-11-18 03:16:49.041014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.598 [2024-11-18 03:16:49.041058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:45.598 [2024-11-18 03:16:49.041069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:45.598 [2024-11-18 03:16:49.041079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.598 [2024-11-18 03:16:49.043186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.598 [2024-11-18 03:16:49.043219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:45.598 [2024-11-18 03:16:49.043229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.094 ms 00:16:45.598 [2024-11-18 03:16:49.043235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.598 [2024-11-18 03:16:49.043295] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:45.598 [2024-11-18 03:16:49.043492] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:45.598 [2024-11-18 03:16:49.043506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.598 [2024-11-18 03:16:49.043513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:45.598 [2024-11-18 03:16:49.043525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:16:45.598 [2024-11-18 03:16:49.043531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.598 [2024-11-18 03:16:49.044832] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:45.598 [2024-11-18 03:16:49.047796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.598 [2024-11-18 03:16:49.047830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:45.598 [2024-11-18 03:16:49.047838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.965 ms 00:16:45.598 [2024-11-18 03:16:49.047846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.598 [2024-11-18 03:16:49.047900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.598 [2024-11-18 03:16:49.047911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:45.598 [2024-11-18 03:16:49.047918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.017 ms 00:16:45.598 [2024-11-18 03:16:49.047924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.598 [2024-11-18 03:16:49.054235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.598 [2024-11-18 03:16:49.054261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:45.598 [2024-11-18 03:16:49.054268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.281 ms 00:16:45.598 [2024-11-18 03:16:49.054274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.598 [2024-11-18 03:16:49.054384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.598 [2024-11-18 03:16:49.054395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:45.598 [2024-11-18 03:16:49.054402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:16:45.598 [2024-11-18 03:16:49.054408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.598 [2024-11-18 03:16:49.054442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.598 [2024-11-18 03:16:49.054458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:45.598 [2024-11-18 03:16:49.054464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:45.598 [2024-11-18 03:16:49.054470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.598 [2024-11-18 03:16:49.054487] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:45.598 [2024-11-18 03:16:49.056028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.598 [2024-11-18 03:16:49.056053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:45.598 [2024-11-18 03:16:49.056061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:16:45.598 [2024-11-18 03:16:49.056067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.598 [2024-11-18 03:16:49.056109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.598 [2024-11-18 03:16:49.056120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:45.598 [2024-11-18 03:16:49.056128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:45.598 [2024-11-18 03:16:49.056134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.598 [2024-11-18 03:16:49.056153] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:45.598 [2024-11-18 03:16:49.056170] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:45.598 [2024-11-18 03:16:49.056200] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:45.598 [2024-11-18 03:16:49.056218] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:45.598 [2024-11-18 03:16:49.056302] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:45.598 [2024-11-18 03:16:49.056325] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:45.598 [2024-11-18 03:16:49.056333] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:45.598 [2024-11-18 03:16:49.056342] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:45.598 [2024-11-18 03:16:49.056351] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:45.598 [2024-11-18 03:16:49.056358] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:45.598 [2024-11-18 03:16:49.056365] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:45.598 [2024-11-18 03:16:49.056371] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:45.598 [2024-11-18 03:16:49.056380] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:45.598 [2024-11-18 03:16:49.056386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.598 [2024-11-18 03:16:49.056397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:45.598 [2024-11-18 03:16:49.056406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:16:45.598 [2024-11-18 03:16:49.056415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.598 [2024-11-18 03:16:49.056483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.598 [2024-11-18 03:16:49.056490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:45.598 [2024-11-18 03:16:49.056497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:45.598 [2024-11-18 03:16:49.056503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.598 [2024-11-18 03:16:49.056583] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:45.598 [2024-11-18 03:16:49.056597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:45.598 [2024-11-18 03:16:49.056607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.598 [2024-11-18 03:16:49.056616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.598 [2024-11-18 03:16:49.056622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:45.598 [2024-11-18 03:16:49.056627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:45.598 [2024-11-18 03:16:49.056633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:45.598 [2024-11-18 03:16:49.056640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:45.598 [2024-11-18 03:16:49.056648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:45.598 [2024-11-18 03:16:49.056654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.598 [2024-11-18 03:16:49.056660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:45.598 [2024-11-18 03:16:49.056665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:45.598 [2024-11-18 03:16:49.056670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.598 [2024-11-18 03:16:49.056675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:45.598 [2024-11-18 03:16:49.056683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:45.598 [2024-11-18 03:16:49.056689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.598 [2024-11-18 03:16:49.056696] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:45.598 [2024-11-18 03:16:49.056702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:45.598 [2024-11-18 03:16:49.056708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.598 [2024-11-18 03:16:49.056715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:45.598 [2024-11-18 03:16:49.056721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:45.598 [2024-11-18 03:16:49.056727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:45.598 [2024-11-18 03:16:49.056733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:45.598 [2024-11-18 03:16:49.056739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:45.598 [2024-11-18 03:16:49.056749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:45.599 [2024-11-18 03:16:49.056755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:45.599 [2024-11-18 03:16:49.056761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:45.599 [2024-11-18 03:16:49.056767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:45.599 [2024-11-18 03:16:49.056773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:45.599 [2024-11-18 03:16:49.056779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:45.599 [2024-11-18 03:16:49.056785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:45.599 [2024-11-18 03:16:49.056792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:45.599 [2024-11-18 03:16:49.056797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:45.599 [2024-11-18 03:16:49.056803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.599 [2024-11-18 03:16:49.056809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:45.599 [2024-11-18 03:16:49.056815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:45.599 [2024-11-18 03:16:49.056822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.599 [2024-11-18 03:16:49.056828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:45.599 [2024-11-18 03:16:49.056833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:45.599 [2024-11-18 03:16:49.056839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.599 [2024-11-18 03:16:49.056847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:45.599 [2024-11-18 03:16:49.056853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:45.599 [2024-11-18 03:16:49.056860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.599 [2024-11-18 03:16:49.056866] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:45.599 [2024-11-18 03:16:49.056875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:45.599 [2024-11-18 03:16:49.056881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.599 [2024-11-18 03:16:49.056888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.599 [2024-11-18 03:16:49.056895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:45.599 
[2024-11-18 03:16:49.056904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:45.599 [2024-11-18 03:16:49.056910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:45.599 [2024-11-18 03:16:49.056916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:45.599 [2024-11-18 03:16:49.056922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:45.599 [2024-11-18 03:16:49.056929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:45.599 [2024-11-18 03:16:49.056936] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:45.599 [2024-11-18 03:16:49.056944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.599 [2024-11-18 03:16:49.056951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:45.599 [2024-11-18 03:16:49.056959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:45.599 [2024-11-18 03:16:49.056967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:45.599 [2024-11-18 03:16:49.056973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:45.599 [2024-11-18 03:16:49.056979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:45.599 [2024-11-18 03:16:49.056986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:45.599 [2024-11-18 03:16:49.056992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:45.599 [2024-11-18 03:16:49.057000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:45.599 [2024-11-18 03:16:49.057006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:45.599 [2024-11-18 03:16:49.057012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:45.599 [2024-11-18 03:16:49.057018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:45.599 [2024-11-18 03:16:49.057025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:45.599 [2024-11-18 03:16:49.057031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:45.599 [2024-11-18 03:16:49.057038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:45.599 [2024-11-18 03:16:49.057044] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:45.599 [2024-11-18 03:16:49.057054] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.599 [2024-11-18 03:16:49.057062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:45.599 [2024-11-18 03:16:49.057069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:45.599 [2024-11-18 03:16:49.057075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:45.599 [2024-11-18 03:16:49.057080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:45.599 [2024-11-18 03:16:49.057085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.599 [2024-11-18 03:16:49.057091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:45.599 [2024-11-18 03:16:49.057099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:16:45.599 [2024-11-18 03:16:49.057104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.599 [2024-11-18 03:16:49.076286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.599 [2024-11-18 03:16:49.076480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:45.599 [2024-11-18 03:16:49.076558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.141 ms 00:16:45.599 [2024-11-18 03:16:49.076589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.599 [2024-11-18 03:16:49.076777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.599 [2024-11-18 03:16:49.076881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:45.599 [2024-11-18 03:16:49.076911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:16:45.599 [2024-11-18 03:16:49.076942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.599 [2024-11-18 03:16:49.087902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.599 [2024-11-18 03:16:49.088002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:45.599 [2024-11-18 03:16:49.088040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.916 ms 00:16:45.599 [2024-11-18 03:16:49.088062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.599 [2024-11-18 03:16:49.088124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.599 [2024-11-18 03:16:49.088147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:45.599 [2024-11-18 03:16:49.088165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:45.599 [2024-11-18 03:16:49.088180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.599 [2024-11-18 03:16:49.088601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.599 [2024-11-18 03:16:49.088636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:45.599 [2024-11-18 03:16:49.088653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:16:45.599 [2024-11-18 03:16:49.088668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.599 [2024-11-18 
03:16:49.088791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.599 [2024-11-18 03:16:49.088868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:45.599 [2024-11-18 03:16:49.088887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:16:45.599 [2024-11-18 03:16:49.088905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.599 [2024-11-18 03:16:49.094782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.599 [2024-11-18 03:16:49.094874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:45.599 [2024-11-18 03:16:49.094913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.848 ms 00:16:45.599 [2024-11-18 03:16:49.094930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.599 [2024-11-18 03:16:49.097910] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:45.599 [2024-11-18 03:16:49.098015] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:45.599 [2024-11-18 03:16:49.098071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.599 [2024-11-18 03:16:49.098088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:45.599 [2024-11-18 03:16:49.098103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.056 ms 00:16:45.599 [2024-11-18 03:16:49.098117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.599 [2024-11-18 03:16:49.109655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.599 [2024-11-18 03:16:49.109748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:45.599 [2024-11-18 03:16:49.109789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.484 ms 00:16:45.599 [2024-11-18 03:16:49.109808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.599 [2024-11-18 03:16:49.111580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.599 [2024-11-18 03:16:49.111667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:45.599 [2024-11-18 03:16:49.111706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.709 ms 00:16:45.599 [2024-11-18 03:16:49.111723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.599 [2024-11-18 03:16:49.113365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.599 [2024-11-18 03:16:49.113449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:45.599 [2024-11-18 03:16:49.113494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.605 ms 00:16:45.599 [2024-11-18 03:16:49.113511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.600 [2024-11-18 03:16:49.113955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.600 [2024-11-18 03:16:49.114127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:45.600 [2024-11-18 03:16:49.114154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:16:45.600 [2024-11-18 03:16:49.114174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.600 [2024-11-18 03:16:49.132774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:45.600 [2024-11-18 03:16:49.132899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:45.600 [2024-11-18 03:16:49.132944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.554 ms 00:16:45.600 [2024-11-18 03:16:49.132963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.600 [2024-11-18 03:16:49.146724] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:45.600 [2024-11-18 03:16:49.163518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.600 [2024-11-18 03:16:49.163640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:45.600 [2024-11-18 03:16:49.163686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.243 ms 00:16:45.600 [2024-11-18 03:16:49.163704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.600 [2024-11-18 03:16:49.163804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.600 [2024-11-18 03:16:49.163831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:45.600 [2024-11-18 03:16:49.163849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:45.600 [2024-11-18 03:16:49.163867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.600 [2024-11-18 03:16:49.163932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.600 [2024-11-18 03:16:49.164037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:45.600 [2024-11-18 03:16:49.164048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:45.600 [2024-11-18 03:16:49.164054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.600 [2024-11-18 03:16:49.164077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.600 [2024-11-18 03:16:49.164089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:45.600 [2024-11-18 03:16:49.164096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:45.600 [2024-11-18 03:16:49.164102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.600 [2024-11-18 03:16:49.164132] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:45.600 [2024-11-18 03:16:49.164145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.600 [2024-11-18 03:16:49.164151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:45.600 [2024-11-18 03:16:49.164158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:45.600 [2024-11-18 03:16:49.164172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.600 [2024-11-18 03:16:49.168554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.600 [2024-11-18 03:16:49.168585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:45.600 [2024-11-18 03:16:49.168594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.363 ms 00:16:45.600 [2024-11-18 03:16:49.168600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.600 [2024-11-18 03:16:49.168670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.600 [2024-11-18 03:16:49.168681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:16:45.600 [2024-11-18 03:16:49.168688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:45.600 [2024-11-18 03:16:49.168694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.600 [2024-11-18 03:16:49.169489] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:45.600 [2024-11-18 03:16:49.170343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.198 ms, result 0 00:16:45.857 [2024-11-18 03:16:49.171510] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:45.857 [2024-11-18 03:16:49.180419] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:46.790  [2024-11-18T03:16:51.302Z] Copying: 23/256 [MB] (23 MBps) [2024-11-18T03:16:52.237Z] Copying: 45/256 [MB] (22 MBps) [2024-11-18T03:16:53.246Z] Copying: 67/256 [MB] (21 MBps) [2024-11-18T03:16:54.623Z] Copying: 91/256 [MB] (23 MBps) [2024-11-18T03:16:55.189Z] Copying: 103/256 [MB] (12 MBps) [2024-11-18T03:16:56.640Z] Copying: 115/256 [MB] (11 MBps) [2024-11-18T03:16:57.210Z] Copying: 126/256 [MB] (11 MBps) [2024-11-18T03:16:58.585Z] Copying: 138/256 [MB] (11 MBps) [2024-11-18T03:16:59.523Z] Copying: 149/256 [MB] (11 MBps) [2024-11-18T03:17:00.462Z] Copying: 161/256 [MB] (12 MBps) [2024-11-18T03:17:01.398Z] Copying: 173/256 [MB] (12 MBps) [2024-11-18T03:17:02.334Z] Copying: 187/256 [MB] (14 MBps) [2024-11-18T03:17:03.275Z] Copying: 199/256 [MB] (11 MBps) [2024-11-18T03:17:04.210Z] Copying: 213/256 [MB] (14 MBps) [2024-11-18T03:17:05.589Z] Copying: 225/256 [MB] (11 MBps) [2024-11-18T03:17:06.523Z] Copying: 235/256 [MB] (10 MBps) [2024-11-18T03:17:07.096Z] Copying: 247/256 [MB] (11 MBps) [2024-11-18T03:17:07.096Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-18 03:17:06.941469] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:03.519 [2024-11-18 03:17:06.944069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.519 [2024-11-18 03:17:06.944328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:03.519 [2024-11-18 03:17:06.944366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:03.519 [2024-11-18 03:17:06.944376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.519 [2024-11-18 03:17:06.944412] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:03.519 [2024-11-18 03:17:06.945411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.519 [2024-11-18 03:17:06.945464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:03.519 [2024-11-18 03:17:06.945477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:17:03.519 [2024-11-18 03:17:06.945488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.519 [2024-11-18 03:17:06.945778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.519 [2024-11-18 03:17:06.945795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:03.519 [2024-11-18 03:17:06.945805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:17:03.519 [2024-11-18 03:17:06.945814] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:03.519 [2024-11-18 03:17:06.949619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.519 [2024-11-18 03:17:06.949654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:03.519 [2024-11-18 03:17:06.949666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.781 ms 00:17:03.519 [2024-11-18 03:17:06.949675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.519 [2024-11-18 03:17:06.956771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.519 [2024-11-18 03:17:06.956821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:03.519 [2024-11-18 03:17:06.956834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.058 ms 00:17:03.519 [2024-11-18 03:17:06.956843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.519 [2024-11-18 03:17:06.960197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.519 [2024-11-18 03:17:06.960429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:03.519 [2024-11-18 03:17:06.960450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.283 ms 00:17:03.519 [2024-11-18 03:17:06.960474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.519 [2024-11-18 03:17:06.966910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.519 [2024-11-18 03:17:06.966973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:03.519 [2024-11-18 03:17:06.966995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.252 ms 00:17:03.519 [2024-11-18 03:17:06.967004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.519 [2024-11-18 03:17:06.967151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.519 [2024-11-18 03:17:06.967163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:03.519 [2024-11-18 03:17:06.967174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:03.519 [2024-11-18 03:17:06.967183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.519 [2024-11-18 03:17:06.970873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.519 [2024-11-18 03:17:06.971081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:03.519 [2024-11-18 03:17:06.971101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.670 ms 00:17:03.519 [2024-11-18 03:17:06.971110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.519 [2024-11-18 03:17:06.974270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.519 [2024-11-18 03:17:06.974502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:03.519 [2024-11-18 03:17:06.974524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.038 ms 00:17:03.519 [2024-11-18 03:17:06.974532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.519 [2024-11-18 03:17:06.977030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.519 [2024-11-18 03:17:06.977082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:03.519 [2024-11-18 03:17:06.977093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.450 ms 00:17:03.519 
[2024-11-18 03:17:06.977100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.519 [2024-11-18 03:17:06.979430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.519 [2024-11-18 03:17:06.979482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:03.519 [2024-11-18 03:17:06.979492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:17:03.519 [2024-11-18 03:17:06.979498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.519 [2024-11-18 03:17:06.979547] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:03.519 [2024-11-18 03:17:06.979575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:03.519 [2024-11-18 03:17:06.979746] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
[2024-11-18 03:17:06.979755 - 03:17:06.980420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 22-100: all 0 / 261120 wr_cnt: 0 state: free
[2024-11-18 03:17:06.980437] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-18 03:17:06.980448] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e65d81cb-0fb4-4410-9e48-4b07019a2d5e
[2024-11-18 03:17:06.980464] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-18 03:17:06.980472] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-18 03:17:06.980480] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-18 03:17:06.980494] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-18 03:17:06.980503] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-18 03:17:06.980511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
[2024-11-18 03:17:06.980519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
[2024-11-18 03:17:06.980526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
[2024-11-18 03:17:06.980532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
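The WAF: inf figure above follows from the two counters printed just before it: write amplification is the ratio of total media writes to user-issued writes, and this pass made 960 internal FTL writes against 0 user writes. A minimal Python sketch of the computation (the helper name is ours, not SPDK's):

    # Reproduces the "WAF" line of ftl_dev_dump_stats: write amplification
    # factor = total media writes / user writes. With user_writes == 0 (all
    # 960 writes in this pass were internal FTL metadata writes), the ratio
    # is infinite, which the dump prints as "inf".
    def write_amplification(total_writes: int, user_writes: int) -> float:
        if user_writes == 0:
            return float("inf")
        return total_writes / user_writes

    print(write_amplification(960, 0))  # -> inf, matching "WAF: inf" above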
[2024-11-18 03:17:06.980541] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 0.996 ms, status: 0
[2024-11-18 03:17:06.983959] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 3.343 ms, status: 0
[2024-11-18 03:17:06.984175] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.132 ms, status: 0
[2024-11-18 03:17:06.994716] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
[2024-11-18 03:17:06.995080] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
[2024-11-18 03:17:06.995215] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
[2024-11-18 03:17:06.995412] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
[2024-11-18 03:17:07.014955] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
[2024-11-18 03:17:07.029856] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
[2024-11-18 03:17:07.030225] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
[2024-11-18 03:17:07.030364] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
[2024-11-18 03:17:07.030749] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
[2024-11-18 03:17:07.030871] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
[2024-11-18 03:17:07.030995] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
[2024-11-18 03:17:07.031089] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
[2024-11-18 03:17:07.031304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 87.200 ms, result 0
03:17:07 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
03:17:07 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
03:17:07 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
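The three commands above are the verification step of the trim test: the first 4 MiB of the dumped data file must compare equal to /dev/zero (the trimmed range reads back as zeroes), the file is fingerprinted with md5sum, and spdk_dd then rewrites the random pattern through the ftl0 bdev. A rough Python equivalent of the zero-check and fingerprint, using the paths shown in the log:

    # Rough equivalent of `cmp --bytes=4194304 data /dev/zero` followed by
    # `md5sum data`: the trimmed region must read back as all zero bytes.
    import hashlib

    DATA = "/home/vagrant/spdk_repo/spdk/test/ftl/data"

    with open(DATA, "rb") as f:
        head = f.read(4 * 1024 * 1024)  # 4194304 bytes, as in --bytes=4194304
    assert head == bytes(len(head)), "trimmed range is not zeroed"

    with open(DATA, "rb") as f:
        print(hashlib.md5(f.read()).hexdigest())  # the md5sum step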
[2024-11-18 03:17:07.977259] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
[2024-11-18 03:17:07.977419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85867 ]
[2024-11-18 03:17:08.130949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-18 03:17:08.205853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-18 03:17:08.359084] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-18 03:17:08.359186] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-18 03:17:08.523255] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.006 ms, status: 0
[2024-11-18 03:17:08.526103] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 2.704 ms, status: 0
[2024-11-18 03:17:08.526299] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-18 03:17:08.526684] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-18 03:17:08.526713] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.424 ms, status: 0
[2024-11-18 03:17:08.529273] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-18 03:17:08.534356] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 5.084 ms, status: 0
[2024-11-18 03:17:08.534557] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.032 ms, status: 0
[2024-11-18 03:17:08.546410] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 11.763 ms, status: 0
[2024-11-18 03:17:08.546645] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.087 ms, status: 0
[2024-11-18 03:17:08.546706] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.008 ms, status: 0
[2024-11-18 03:17:08.546768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
[2024-11-18 03:17:08.549571] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 2.808 ms, status: 0
[2024-11-18 03:17:08.549943] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.022 ms, status: 0
[2024-11-18 03:17:08.550003] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-18 03:17:08.550035] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-18 03:17:08.550081] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-18 03:17:08.550103] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-18 03:17:08.550217] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-18 03:17:08.550232] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-18 03:17:08.550244] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-18 03:17:08.550255] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-18 03:17:08.550267] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-18 03:17:08.550282] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
[2024-11-18 03:17:08.550290] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-18 03:17:08.550301] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-18 03:17:08.550310] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-18 03:17:08.550340] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.340 ms, status: 0
[2024-11-18 03:17:08.550512] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.093 ms, status: 0
[2024-11-18 03:17:08.550738] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
[2024-11-18 03:17:08.550769] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
[2024-11-18 03:17:08.550819] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 90.00 MiB
[2024-11-18 03:17:08.550847] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 90.12 MiB, blocks 0.50 MiB
[2024-11-18 03:17:08.550874] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 90.62 MiB, blocks 0.50 MiB
[2024-11-18 03:17:08.550901] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 123.88 MiB, blocks 0.12 MiB
[2024-11-18 03:17:08.550931] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 124.00 MiB, blocks 0.12 MiB
[2024-11-18 03:17:08.550958] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 91.12 MiB, blocks 8.00 MiB
[2024-11-18 03:17:08.550983] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 99.12 MiB, blocks 8.00 MiB
[2024-11-18 03:17:08.551009] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 107.12 MiB, blocks 8.00 MiB
[2024-11-18 03:17:08.551032] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 115.12 MiB, blocks 8.00 MiB
[2024-11-18 03:17:08.551052] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 123.12 MiB, blocks 0.25 MiB
[2024-11-18 03:17:08.551073] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 123.38 MiB, blocks 0.25 MiB
[2024-11-18 03:17:08.551097] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 123.62 MiB, blocks 0.12 MiB
[2024-11-18 03:17:08.551121] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 123.75 MiB, blocks 0.12 MiB
[2024-11-18 03:17:08.551146] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
[2024-11-18 03:17:08.551156] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
[2024-11-18 03:17:08.551186] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
[2024-11-18 03:17:08.551210] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
[2024-11-18 03:17:08.551235] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
[2024-11-18 03:17:08.551252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-18 03:17:08.551265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
[2024-11-18 03:17:08.551276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
[2024-11-18 03:17:08.551286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
[2024-11-18 03:17:08.551294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
[2024-11-18 03:17:08.551303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
[2024-11-18 03:17:08.551345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
[2024-11-18 03:17:08.551355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
[2024-11-18 03:17:08.551363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
[2024-11-18 03:17:08.551371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
[2024-11-18 03:17:08.551378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
[2024-11-18 03:17:08.551385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
[2024-11-18 03:17:08.551396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
[2024-11-18 03:17:08.551405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
[2024-11-18 03:17:08.551413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
[2024-11-18 03:17:08.551420] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
[2024-11-18 03:17:08.551429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-18 03:17:08.551441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
[2024-11-18 03:17:08.551452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
[2024-11-18 03:17:08.551460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
[2024-11-18 03:17:08.551471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-18 03:17:08.551480] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.840 ms, status: 0
[2024-11-18 03:17:08.584777] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 33.204 ms, status: 0
[2024-11-18 03:17:08.585309] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.181 ms, status: 0
[2024-11-18 03:17:08.602197] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 16.673 ms, status: 0
[2024-11-18 03:17:08.602435] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.006 ms, status: 0
[2024-11-18 03:17:08.603262] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.724 ms, status: 0
[2024-11-18 03:17:08.603544] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.142 ms, status: 0
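The blk_offs/blk_sz fields in the superblock dump are expressed in FTL blocks; assuming the usual 4 KiB FTL block size (an assumption, but one consistent with the MiB figures in the region dump above), the entries convert directly to the offsets and sizes printed by dump_region. A small sketch of the conversion:

    # Convert blk_offs/blk_sz from the SB metadata dump into MiB.
    # BLOCK_SIZE = 4096 is an assumption; it matches the dump above,
    # e.g. the l2p region: blk_sz 0x5a00 blocks -> 90.00 MiB.
    BLOCK_SIZE = 4096  # bytes per FTL block (assumed)

    def blocks_to_mib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / (1024 * 1024)

    print(blocks_to_mib(0x5a00))  # 90.0   -> "Region l2p ... blocks: 90.00 MiB"
    print(blocks_to_mib(0x800))   # 8.0    -> each p2l0..p2l3 region
    print(blocks_to_mib(0x20))    # 0.125  -> the 0.12 MiB superblock region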
[2024-11-18 03:17:08.614455] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 10.850 ms, status: 0
[2024-11-18 03:17:08.619884] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
[2024-11-18 03:17:08.619942] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-18 03:17:08.619957] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 5.180 ms, status: 0
[2024-11-18 03:17:08.637068] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 16.985 ms, status: 0
[2024-11-18 03:17:08.640750] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 3.485 ms, status: 0
[2024-11-18 03:17:08.644293] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 3.068 ms, status: 0
[2024-11-18 03:17:08.644756] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.271 ms, status: 0
[2024-11-18 03:17:08.678485] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 33.657 ms, status: 0
[2024-11-18 03:17:08.686986] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-11-18 03:17:08.712082] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 33.416 ms, status: 0
[2024-11-18 03:17:08.712280] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.018 ms, status: 0
[2024-11-18 03:17:08.712428] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.050 ms, status: 0
[2024-11-18 03:17:08.712495] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.012 ms, status: 0
[2024-11-18 03:17:08.712572] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-18 03:17:08.712588] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.017 ms, status: 0
[2024-11-18 03:17:08.720422] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 7.777 ms, status: 0
[2024-11-18 03:17:08.720624] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.052 ms, status: 0
[2024-11-18 03:17:08.721965] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-18 03:17:08.723635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 198.316 ms, result 0
[2024-11-18 03:17:08.725066] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-18 03:17:08.732351] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-18T03:17:09.247Z] Copying: 4096/4096 [kB] (average 10088 kBps)
[2024-11-18 03:17:09.139531] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
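finish_msg reports 198.316 ms for the whole 'FTL startup' process above, while the individual trace_step lines carry per-step durations; summing the steps and comparing against the total is a quick sanity check when reading these logs, and the same pattern applies to the shutdown steps that follow. A throwaway parser sketch (the regexes target the one-record-per-line NOTICE format used here and would need adjusting for other layouts):

    # Tally per-step durations from FTL trace_step lines and compare with
    # the totals reported by finish_msg. Throwaway log-reading helper.
    import re

    STEP = re.compile(r"trace_step: .*?duration: ([0-9.]+) ms")
    TOTAL = re.compile(r"finish_msg: .*?duration = ([0-9.]+) ms")

    def summarize(log_text: str) -> None:
        steps = [float(m.group(1)) for m in STEP.finditer(log_text)]
        totals = [float(m.group(1)) for m in TOTAL.finditer(log_text)]
        # Step durations need not sum exactly to the total: time spent
        # between steps belongs to the management process as a whole.
        print(f"{len(steps)} steps, {sum(steps):.3f} ms in steps; totals: {totals}")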
[2024-11-18 03:17:09.140866] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.004 ms, status: 0
[2024-11-18 03:17:09.140980] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-11-18 03:17:09.141978] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 0.983 ms, status: 0
[2024-11-18 03:17:09.145150] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 3.061 ms, status: 0
[2024-11-18 03:17:09.149783] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 4.533 ms, status: 0
[2024-11-18 03:17:09.157251] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 7.019 ms, status: 0
[2024-11-18 03:17:09.160680] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 3.172 ms, status: 0
[2024-11-18 03:17:09.167254] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 6.441 ms, status: 0
[2024-11-18 03:17:09.167543] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 0.119 ms, status: 0
[2024-11-18 03:17:09.171529] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 3.931 ms, status: 0
[2024-11-18 03:17:09.174796] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 3.143 ms, status: 0
[2024-11-18 03:17:09.177546] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 2.452 ms, status: 0
[2024-11-18 03:17:09.180002] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 2.291 ms, status: 0
[2024-11-18 03:17:09.180119] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-18 03:17:09.180145 - 03:17:09.181000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: all 0 / 261120 wr_cnt: 0 state: free
[2024-11-18 03:17:09.181016] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-18 03:17:09.181025] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e65d81cb-0fb4-4410-9e48-4b07019a2d5e
[2024-11-18 03:17:09.181042] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-18 03:17:09.181049] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-18 03:17:09.181060] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-18 03:17:09.181069] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-18 03:17:09.181077] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-18 03:17:09.181086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
[2024-11-18 03:17:09.181094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
[2024-11-18 03:17:09.181102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
[2024-11-18 03:17:09.181109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
[2024-11-18 03:17:09.181117] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 1.000 ms, status: 0
[2024-11-18 03:17:09.183955] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 2.786 ms, status: 0
[2024-11-18 03:17:09.184154] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.112 ms, status: 0
[2024-11-18 03:17:09.194529] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
[2024-11-18 03:17:09.194694] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
[2024-11-18 03:17:09.194775] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
[2024-11-18 03:17:09.194823] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.672 [2024-11-18 03:17:09.194834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:05.672 [2024-11-18 03:17:09.194846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.672 [2024-11-18 03:17:09.194854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.672 [2024-11-18 03:17:09.213785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.672 [2024-11-18 03:17:09.214044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:05.672 [2024-11-18 03:17:09.214062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.672 [2024-11-18 03:17:09.214082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.672 [2024-11-18 03:17:09.228817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.672 [2024-11-18 03:17:09.229058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.672 [2024-11-18 03:17:09.229078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.672 [2024-11-18 03:17:09.229089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.672 [2024-11-18 03:17:09.229190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.672 [2024-11-18 03:17:09.229209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.672 [2024-11-18 03:17:09.229219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.672 [2024-11-18 03:17:09.229233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.672 [2024-11-18 03:17:09.229272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.672 [2024-11-18 03:17:09.229282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.672 [2024-11-18 03:17:09.229291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.672 [2024-11-18 03:17:09.229303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.672 [2024-11-18 03:17:09.229421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.672 [2024-11-18 03:17:09.229435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.672 [2024-11-18 03:17:09.229445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.672 [2024-11-18 03:17:09.229454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.672 [2024-11-18 03:17:09.229491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.672 [2024-11-18 03:17:09.229504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:05.672 [2024-11-18 03:17:09.229513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.672 [2024-11-18 03:17:09.229526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.672 [2024-11-18 03:17:09.229577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.672 [2024-11-18 03:17:09.229589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.672 [2024-11-18 03:17:09.229600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.672 [2024-11-18 03:17:09.229609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:05.672 [2024-11-18 03:17:09.229666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.672 [2024-11-18 03:17:09.229680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.672 [2024-11-18 03:17:09.229690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.672 [2024-11-18 03:17:09.229703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.672 [2024-11-18 03:17:09.229882] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.975 ms, result 0 00:17:06.244 00:17:06.244 00:17:06.244 03:17:09 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85887 00:17:06.244 03:17:09 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85887 00:17:06.244 03:17:09 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:06.244 03:17:09 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85887 ']' 00:17:06.244 03:17:09 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.244 03:17:09 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.244 03:17:09 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.244 03:17:09 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.244 03:17:09 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:06.244 [2024-11-18 03:17:09.633371] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:06.244 [2024-11-18 03:17:09.633744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85887 ] 00:17:06.244 [2024-11-18 03:17:09.784828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.505 [2024-11-18 03:17:09.858487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.078 03:17:10 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.078 03:17:10 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:07.078 03:17:10 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:07.346 [2024-11-18 03:17:10.701298] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:07.346 [2024-11-18 03:17:10.701409] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:07.346 [2024-11-18 03:17:10.881082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.346 [2024-11-18 03:17:10.881154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:07.346 [2024-11-18 03:17:10.881173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:07.346 [2024-11-18 03:17:10.881184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.346 [2024-11-18 03:17:10.883991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.346 [2024-11-18 03:17:10.884052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:07.346 [2024-11-18 03:17:10.884064] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.780 ms 00:17:07.346 [2024-11-18 03:17:10.884078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.346 [2024-11-18 03:17:10.884190] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:07.346 [2024-11-18 03:17:10.884508] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:07.346 [2024-11-18 03:17:10.884526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.346 [2024-11-18 03:17:10.884537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:07.346 [2024-11-18 03:17:10.884547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:17:07.346 [2024-11-18 03:17:10.884558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.346 [2024-11-18 03:17:10.887128] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:07.346 [2024-11-18 03:17:10.892214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.346 [2024-11-18 03:17:10.892273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:07.346 [2024-11-18 03:17:10.892288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.083 ms 00:17:07.346 [2024-11-18 03:17:10.892302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.346 [2024-11-18 03:17:10.892420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.346 [2024-11-18 03:17:10.892431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:07.346 [2024-11-18 03:17:10.892449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:07.346 [2024-11-18 03:17:10.892458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.346 [2024-11-18 03:17:10.904331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.346 [2024-11-18 03:17:10.904373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:07.346 [2024-11-18 03:17:10.904388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.796 ms 00:17:07.346 [2024-11-18 03:17:10.904397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.346 [2024-11-18 03:17:10.904535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.346 [2024-11-18 03:17:10.904548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:07.346 [2024-11-18 03:17:10.904561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:07.346 [2024-11-18 03:17:10.904569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.346 [2024-11-18 03:17:10.904607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.346 [2024-11-18 03:17:10.904616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:07.346 [2024-11-18 03:17:10.904627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:07.346 [2024-11-18 03:17:10.904638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.346 [2024-11-18 03:17:10.904672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:07.346 [2024-11-18 03:17:10.907511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:07.346 [2024-11-18 03:17:10.907559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:07.346 [2024-11-18 03:17:10.907570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:17:07.346 [2024-11-18 03:17:10.907581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.346 [2024-11-18 03:17:10.907629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.346 [2024-11-18 03:17:10.907640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:07.346 [2024-11-18 03:17:10.907650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:07.346 [2024-11-18 03:17:10.907666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.346 [2024-11-18 03:17:10.907689] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:07.346 [2024-11-18 03:17:10.907717] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:07.346 [2024-11-18 03:17:10.907765] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:07.346 [2024-11-18 03:17:10.907790] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:07.346 [2024-11-18 03:17:10.907904] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:07.346 [2024-11-18 03:17:10.907924] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:07.346 [2024-11-18 03:17:10.907937] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:07.346 [2024-11-18 03:17:10.907954] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:07.346 [2024-11-18 03:17:10.907964] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:07.346 [2024-11-18 03:17:10.907977] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:07.346 [2024-11-18 03:17:10.907988] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:07.346 [2024-11-18 03:17:10.907997] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:07.346 [2024-11-18 03:17:10.908006] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:07.346 [2024-11-18 03:17:10.908016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.346 [2024-11-18 03:17:10.908027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:07.346 [2024-11-18 03:17:10.908038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:17:07.346 [2024-11-18 03:17:10.908047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.346 [2024-11-18 03:17:10.908144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.346 [2024-11-18 03:17:10.908155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:07.346 [2024-11-18 03:17:10.908167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:07.346 [2024-11-18 03:17:10.908176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.346 
[2024-11-18 03:17:10.908286] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:07.346 [2024-11-18 03:17:10.908299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:07.346 [2024-11-18 03:17:10.908336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:07.346 [2024-11-18 03:17:10.908346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.346 [2024-11-18 03:17:10.908360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:07.346 [2024-11-18 03:17:10.908370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:07.346 [2024-11-18 03:17:10.908381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:07.346 [2024-11-18 03:17:10.908392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:07.346 [2024-11-18 03:17:10.908414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:07.346 [2024-11-18 03:17:10.908424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:07.346 [2024-11-18 03:17:10.908436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:07.346 [2024-11-18 03:17:10.908445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:07.346 [2024-11-18 03:17:10.908456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:07.346 [2024-11-18 03:17:10.908466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:07.346 [2024-11-18 03:17:10.908477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:07.346 [2024-11-18 03:17:10.908486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.346 [2024-11-18 03:17:10.908496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:07.346 [2024-11-18 03:17:10.908504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:07.347 [2024-11-18 03:17:10.908514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.347 [2024-11-18 03:17:10.908524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:07.347 [2024-11-18 03:17:10.908536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:07.347 [2024-11-18 03:17:10.908543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:07.347 [2024-11-18 03:17:10.908552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:07.347 [2024-11-18 03:17:10.908560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:07.347 [2024-11-18 03:17:10.908568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:07.347 [2024-11-18 03:17:10.908581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:07.347 [2024-11-18 03:17:10.908590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:07.347 [2024-11-18 03:17:10.908597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:07.347 [2024-11-18 03:17:10.908605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:07.347 [2024-11-18 03:17:10.908612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:07.347 [2024-11-18 03:17:10.908622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:07.347 [2024-11-18 03:17:10.908630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:17:07.347 [2024-11-18 03:17:10.908639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:07.347 [2024-11-18 03:17:10.908645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:07.347 [2024-11-18 03:17:10.908657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:07.347 [2024-11-18 03:17:10.908663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:07.347 [2024-11-18 03:17:10.908674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:07.347 [2024-11-18 03:17:10.908682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:07.347 [2024-11-18 03:17:10.908691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:07.347 [2024-11-18 03:17:10.908698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.347 [2024-11-18 03:17:10.908706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:07.347 [2024-11-18 03:17:10.908714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:07.347 [2024-11-18 03:17:10.908723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.347 [2024-11-18 03:17:10.908731] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:07.347 [2024-11-18 03:17:10.908742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:07.347 [2024-11-18 03:17:10.908752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:07.347 [2024-11-18 03:17:10.908762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.347 [2024-11-18 03:17:10.908770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:07.347 [2024-11-18 03:17:10.908780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:07.347 [2024-11-18 03:17:10.908787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:07.347 [2024-11-18 03:17:10.908796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:07.347 [2024-11-18 03:17:10.908803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:07.347 [2024-11-18 03:17:10.908815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:07.347 [2024-11-18 03:17:10.908828] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:07.347 [2024-11-18 03:17:10.908840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:07.347 [2024-11-18 03:17:10.908849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:07.347 [2024-11-18 03:17:10.908859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:07.347 [2024-11-18 03:17:10.908866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:07.347 [2024-11-18 03:17:10.908876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:07.347 [2024-11-18 03:17:10.908883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:17:07.347 [2024-11-18 03:17:10.908893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:07.347 [2024-11-18 03:17:10.908900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:07.347 [2024-11-18 03:17:10.908911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:07.347 [2024-11-18 03:17:10.908918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:07.347 [2024-11-18 03:17:10.908927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:07.347 [2024-11-18 03:17:10.908935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:07.347 [2024-11-18 03:17:10.908944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:07.347 [2024-11-18 03:17:10.908952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:07.347 [2024-11-18 03:17:10.908964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:07.347 [2024-11-18 03:17:10.908971] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:07.347 [2024-11-18 03:17:10.908982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:07.347 [2024-11-18 03:17:10.908993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:07.347 [2024-11-18 03:17:10.909002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:07.347 [2024-11-18 03:17:10.909012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:07.347 [2024-11-18 03:17:10.909022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:07.347 [2024-11-18 03:17:10.909031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.347 [2024-11-18 03:17:10.909041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:07.347 [2024-11-18 03:17:10.909054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:17:07.347 [2024-11-18 03:17:10.909065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:10.930046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:10.930389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:07.609 [2024-11-18 03:17:10.930432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.891 ms 00:17:07.609 [2024-11-18 03:17:10.930449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 
[2024-11-18 03:17:10.930651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:10.930678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:07.609 [2024-11-18 03:17:10.930697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:17:07.609 [2024-11-18 03:17:10.930712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:10.947856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:10.947915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:07.609 [2024-11-18 03:17:10.947928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.104 ms 00:17:07.609 [2024-11-18 03:17:10.947942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:10.948019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:10.948038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:07.609 [2024-11-18 03:17:10.948049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:07.609 [2024-11-18 03:17:10.948061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:10.948813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:10.948844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:07.609 [2024-11-18 03:17:10.948857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:17:07.609 [2024-11-18 03:17:10.948870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:10.949027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:10.949056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:07.609 [2024-11-18 03:17:10.949071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:17:07.609 [2024-11-18 03:17:10.949083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:10.976081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:10.976151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:07.609 [2024-11-18 03:17:10.976166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.966 ms 00:17:07.609 [2024-11-18 03:17:10.976177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:10.981114] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:07.609 [2024-11-18 03:17:10.981179] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:07.609 [2024-11-18 03:17:10.981194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:10.981206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:07.609 [2024-11-18 03:17:10.981217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.798 ms 00:17:07.609 [2024-11-18 03:17:10.981228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:10.998033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:10.998095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:07.609 [2024-11-18 03:17:10.998115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.690 ms 00:17:07.609 [2024-11-18 03:17:10.998133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:11.001813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:11.001878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:07.609 [2024-11-18 03:17:11.001890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.571 ms 00:17:07.609 [2024-11-18 03:17:11.001900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:11.005135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:11.005398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:07.609 [2024-11-18 03:17:11.005420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.174 ms 00:17:07.609 [2024-11-18 03:17:11.005430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:11.006124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:11.006188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:07.609 [2024-11-18 03:17:11.006203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:17:07.609 [2024-11-18 03:17:11.006215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:11.039919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:11.039991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:07.609 [2024-11-18 03:17:11.040006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.681 ms 00:17:07.609 [2024-11-18 03:17:11.040021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:11.048642] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:07.609 [2024-11-18 03:17:11.073480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:11.073535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:07.609 [2024-11-18 03:17:11.073552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.348 ms 00:17:07.609 [2024-11-18 03:17:11.073561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:11.073674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:11.073686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:07.609 [2024-11-18 03:17:11.073700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:07.609 [2024-11-18 03:17:11.073713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:11.073787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:11.073797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:07.609 [2024-11-18 03:17:11.073813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.047 ms 00:17:07.609 [2024-11-18 03:17:11.073821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:11.073863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:11.073875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:07.609 [2024-11-18 03:17:11.073892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:07.609 [2024-11-18 03:17:11.073901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:11.073950] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:07.609 [2024-11-18 03:17:11.073963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:11.073973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:07.609 [2024-11-18 03:17:11.073982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:07.609 [2024-11-18 03:17:11.073994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:11.081729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:11.081801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:07.609 [2024-11-18 03:17:11.081815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.710 ms 00:17:07.609 [2024-11-18 03:17:11.081827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:11.081935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.609 [2024-11-18 03:17:11.081948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:07.609 [2024-11-18 03:17:11.081961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:07.609 [2024-11-18 03:17:11.081973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.609 [2024-11-18 03:17:11.083405] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:07.609 [2024-11-18 03:17:11.085041] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 201.873 ms, result 0 00:17:07.609 [2024-11-18 03:17:11.086927] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:07.609 Some configs were skipped because the RPC state that can call them passed over. 
00:17:07.609 03:17:11 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:07.871 [2024-11-18 03:17:11.320708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.871 [2024-11-18 03:17:11.320912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:07.871 [2024-11-18 03:17:11.320993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.874 ms 00:17:07.871 [2024-11-18 03:17:11.321019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.871 [2024-11-18 03:17:11.321084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.259 ms, result 0 00:17:07.871 true 00:17:07.871 03:17:11 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:08.164 [2024-11-18 03:17:11.532603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.164 [2024-11-18 03:17:11.532814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:08.164 [2024-11-18 03:17:11.532890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.652 ms 00:17:08.164 [2024-11-18 03:17:11.532919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.164 [2024-11-18 03:17:11.532982] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.031 ms, result 0 00:17:08.164 true 00:17:08.164 03:17:11 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85887 00:17:08.164 03:17:11 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85887 ']' 00:17:08.164 03:17:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85887 00:17:08.164 03:17:11 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:08.164 03:17:11 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:08.164 03:17:11 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85887 00:17:08.164 03:17:11 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:08.164 killing process with pid 85887 00:17:08.164 03:17:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:08.164 03:17:11 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85887' 00:17:08.164 03:17:11 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85887 00:17:08.164 03:17:11 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85887 00:17:08.426 [2024-11-18 03:17:11.767812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.426 [2024-11-18 03:17:11.767883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:08.426 [2024-11-18 03:17:11.767900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:08.426 [2024-11-18 03:17:11.767909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.426 [2024-11-18 03:17:11.767939] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:08.426 [2024-11-18 03:17:11.768783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.426 [2024-11-18 03:17:11.768817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:08.426 [2024-11-18 03:17:11.768830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.826 ms 00:17:08.426 [2024-11-18 03:17:11.768842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.426 [2024-11-18 03:17:11.769154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.426 [2024-11-18 03:17:11.769179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:08.426 [2024-11-18 03:17:11.769189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:17:08.426 [2024-11-18 03:17:11.769200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.426 [2024-11-18 03:17:11.773922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.426 [2024-11-18 03:17:11.773978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:08.426 [2024-11-18 03:17:11.773989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.702 ms 00:17:08.426 [2024-11-18 03:17:11.774000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.426 [2024-11-18 03:17:11.781045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.426 [2024-11-18 03:17:11.781094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:08.426 [2024-11-18 03:17:11.781105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.999 ms 00:17:08.426 [2024-11-18 03:17:11.781117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.426 [2024-11-18 03:17:11.783954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.426 [2024-11-18 03:17:11.784006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:08.427 [2024-11-18 03:17:11.784017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:17:08.427 [2024-11-18 03:17:11.784026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.427 [2024-11-18 03:17:11.790110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.427 [2024-11-18 03:17:11.790165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:08.427 [2024-11-18 03:17:11.790177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.033 ms 00:17:08.427 [2024-11-18 03:17:11.790188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.427 [2024-11-18 03:17:11.790367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.427 [2024-11-18 03:17:11.790383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:08.427 [2024-11-18 03:17:11.790393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:17:08.427 [2024-11-18 03:17:11.790418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.427 [2024-11-18 03:17:11.794035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.427 [2024-11-18 03:17:11.794089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:08.427 [2024-11-18 03:17:11.794100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.592 ms 00:17:08.427 [2024-11-18 03:17:11.794117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.427 [2024-11-18 03:17:11.797020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.427 [2024-11-18 03:17:11.797256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:08.427 [2024-11-18 
03:17:11.797274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.853 ms 00:17:08.427 [2024-11-18 03:17:11.797284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.427 [2024-11-18 03:17:11.799736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.427 [2024-11-18 03:17:11.799790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:08.427 [2024-11-18 03:17:11.799801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.376 ms 00:17:08.427 [2024-11-18 03:17:11.799811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.427 [2024-11-18 03:17:11.802075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.427 [2024-11-18 03:17:11.802127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:08.427 [2024-11-18 03:17:11.802138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:17:08.427 [2024-11-18 03:17:11.802149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.427 [2024-11-18 03:17:11.802195] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:08.427 [2024-11-18 03:17:11.802217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802434] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 
03:17:11.802795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.802990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.803000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.803010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.803021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.803029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.803040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:08.427 [2024-11-18 03:17:11.803050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:08.428 [2024-11-18 03:17:11.803064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:08.428 [2024-11-18 03:17:11.803420] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:08.428 [2024-11-18 03:17:11.803429] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e65d81cb-0fb4-4410-9e48-4b07019a2d5e 00:17:08.428 [2024-11-18 03:17:11.803470] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:08.428 [2024-11-18 03:17:11.803481] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:08.428 [2024-11-18 03:17:11.803492] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:08.428 [2024-11-18 03:17:11.803504] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:08.428 [2024-11-18 03:17:11.803519] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:08.428 [2024-11-18 03:17:11.803528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:08.428 [2024-11-18 03:17:11.803542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:08.428 [2024-11-18 03:17:11.803551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:08.428 [2024-11-18 03:17:11.803562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:08.428 [2024-11-18 03:17:11.803570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.428 [2024-11-18 03:17:11.803581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:08.428 [2024-11-18 03:17:11.803591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.376 ms 00:17:08.428 [2024-11-18 03:17:11.803604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.428 [2024-11-18 03:17:11.806235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.428 [2024-11-18 03:17:11.806279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:08.428 [2024-11-18 03:17:11.806290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.588 ms 00:17:08.428 [2024-11-18 03:17:11.806301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.428 [2024-11-18 03:17:11.806510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:08.428 [2024-11-18 03:17:11.806534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:08.428 [2024-11-18 03:17:11.806551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:17:08.428 [2024-11-18 03:17:11.806568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.428 [2024-11-18 03:17:11.816858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.428 [2024-11-18 03:17:11.816908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:08.428 [2024-11-18 03:17:11.816919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.428 [2024-11-18 03:17:11.816931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.428 [2024-11-18 03:17:11.817027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.428 [2024-11-18 03:17:11.817041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:08.428 [2024-11-18 03:17:11.817051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.428 [2024-11-18 03:17:11.817065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.428 [2024-11-18 03:17:11.817120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.428 [2024-11-18 03:17:11.817135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:08.428 [2024-11-18 03:17:11.817144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.428 [2024-11-18 03:17:11.817155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.428 [2024-11-18 03:17:11.817176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.428 [2024-11-18 03:17:11.817189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:08.428 [2024-11-18 03:17:11.817198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.428 [2024-11-18 03:17:11.817208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.428 [2024-11-18 03:17:11.836006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.428 [2024-11-18 03:17:11.836074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:08.428 [2024-11-18 03:17:11.836087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.428 [2024-11-18 03:17:11.836098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.428 [2024-11-18 03:17:11.850825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.428 [2024-11-18 03:17:11.851120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:08.428 [2024-11-18 03:17:11.851142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.428 [2024-11-18 03:17:11.851157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.428 [2024-11-18 03:17:11.851235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.428 [2024-11-18 03:17:11.851249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:08.428 [2024-11-18 03:17:11.851258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.428 [2024-11-18 03:17:11.851273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:08.428 [2024-11-18 03:17:11.851336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.428 [2024-11-18 03:17:11.851349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:08.428 [2024-11-18 03:17:11.851359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.428 [2024-11-18 03:17:11.851370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.428 [2024-11-18 03:17:11.851478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.428 [2024-11-18 03:17:11.851494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:08.428 [2024-11-18 03:17:11.851505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.428 [2024-11-18 03:17:11.851519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.428 [2024-11-18 03:17:11.851557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.428 [2024-11-18 03:17:11.851570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:08.428 [2024-11-18 03:17:11.851582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.428 [2024-11-18 03:17:11.851598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.428 [2024-11-18 03:17:11.851656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.428 [2024-11-18 03:17:11.851671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:08.428 [2024-11-18 03:17:11.851683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.428 [2024-11-18 03:17:11.851695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.429 [2024-11-18 03:17:11.851758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.429 [2024-11-18 03:17:11.851777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:08.429 [2024-11-18 03:17:11.851788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.429 [2024-11-18 03:17:11.851800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.429 [2024-11-18 03:17:11.851987] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 84.134 ms, result 0 00:17:08.688 03:17:12 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:08.948 [2024-11-18 03:17:12.265291] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:08.948 [2024-11-18 03:17:12.265473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85929 ] 00:17:08.948 [2024-11-18 03:17:12.412251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.948 [2024-11-18 03:17:12.459681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.210 [2024-11-18 03:17:12.559636] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:09.210 [2024-11-18 03:17:12.559695] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:09.210 [2024-11-18 03:17:12.723577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.210 [2024-11-18 03:17:12.723626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:09.210 [2024-11-18 03:17:12.723645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:09.210 [2024-11-18 03:17:12.723654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.210 [2024-11-18 03:17:12.726114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.210 [2024-11-18 03:17:12.726156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:09.210 [2024-11-18 03:17:12.726169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.441 ms 00:17:09.210 [2024-11-18 03:17:12.726177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.210 [2024-11-18 03:17:12.726258] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:09.210 [2024-11-18 03:17:12.726547] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:09.210 [2024-11-18 03:17:12.726566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.210 [2024-11-18 03:17:12.726576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:09.210 [2024-11-18 03:17:12.726588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:17:09.210 [2024-11-18 03:17:12.726595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.210 [2024-11-18 03:17:12.728634] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:09.210 [2024-11-18 03:17:12.732356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.210 [2024-11-18 03:17:12.732400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:09.210 [2024-11-18 03:17:12.732416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:17:09.210 [2024-11-18 03:17:12.732430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.210 [2024-11-18 03:17:12.732507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.210 [2024-11-18 03:17:12.732517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:09.210 [2024-11-18 03:17:12.732527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:09.210 [2024-11-18 03:17:12.732535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.210 [2024-11-18 03:17:12.741295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:09.210 [2024-11-18 03:17:12.741356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:09.210 [2024-11-18 03:17:12.741367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.713 ms 00:17:09.210 [2024-11-18 03:17:12.741375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.210 [2024-11-18 03:17:12.741508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.210 [2024-11-18 03:17:12.741545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:09.210 [2024-11-18 03:17:12.741555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:09.210 [2024-11-18 03:17:12.741562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.210 [2024-11-18 03:17:12.741589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.210 [2024-11-18 03:17:12.741604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:09.210 [2024-11-18 03:17:12.741618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:09.210 [2024-11-18 03:17:12.741626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.210 [2024-11-18 03:17:12.741649] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:09.210 [2024-11-18 03:17:12.743853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.210 [2024-11-18 03:17:12.743886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:09.210 [2024-11-18 03:17:12.743896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.211 ms 00:17:09.210 [2024-11-18 03:17:12.743904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.210 [2024-11-18 03:17:12.743952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.210 [2024-11-18 03:17:12.743966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:09.210 [2024-11-18 03:17:12.743976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:09.210 [2024-11-18 03:17:12.743984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.210 [2024-11-18 03:17:12.744002] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:09.210 [2024-11-18 03:17:12.744033] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:09.210 [2024-11-18 03:17:12.744072] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:09.210 [2024-11-18 03:17:12.744087] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:09.210 [2024-11-18 03:17:12.744197] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:09.210 [2024-11-18 03:17:12.744210] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:09.210 [2024-11-18 03:17:12.744222] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:09.210 [2024-11-18 03:17:12.744233] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:09.210 [2024-11-18 03:17:12.744244] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:09.210 [2024-11-18 03:17:12.744253] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:09.210 [2024-11-18 03:17:12.744262] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:09.210 [2024-11-18 03:17:12.744270] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:09.210 [2024-11-18 03:17:12.744281] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:09.210 [2024-11-18 03:17:12.744292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.210 [2024-11-18 03:17:12.744301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:09.210 [2024-11-18 03:17:12.744331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:09.210 [2024-11-18 03:17:12.744343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.210 [2024-11-18 03:17:12.744431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.210 [2024-11-18 03:17:12.744443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:09.210 [2024-11-18 03:17:12.744451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:09.210 [2024-11-18 03:17:12.744459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.210 [2024-11-18 03:17:12.744562] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:09.210 [2024-11-18 03:17:12.744581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:09.210 [2024-11-18 03:17:12.744595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:09.210 [2024-11-18 03:17:12.744612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.210 [2024-11-18 03:17:12.744621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:09.210 [2024-11-18 03:17:12.744631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:09.210 [2024-11-18 03:17:12.744640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:09.210 [2024-11-18 03:17:12.744648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:09.211 [2024-11-18 03:17:12.744659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:09.211 [2024-11-18 03:17:12.744667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:09.211 [2024-11-18 03:17:12.744676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:09.211 [2024-11-18 03:17:12.744684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:09.211 [2024-11-18 03:17:12.744692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:09.211 [2024-11-18 03:17:12.744701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:09.211 [2024-11-18 03:17:12.744710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:09.211 [2024-11-18 03:17:12.744719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.211 [2024-11-18 03:17:12.744728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:09.211 [2024-11-18 03:17:12.744736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:09.211 [2024-11-18 03:17:12.744745] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.211 [2024-11-18 03:17:12.744756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:09.211 [2024-11-18 03:17:12.744764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:09.211 [2024-11-18 03:17:12.744772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:09.211 [2024-11-18 03:17:12.744780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:09.211 [2024-11-18 03:17:12.744788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:09.211 [2024-11-18 03:17:12.744802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:09.211 [2024-11-18 03:17:12.744811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:09.211 [2024-11-18 03:17:12.744819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:09.211 [2024-11-18 03:17:12.744827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:09.211 [2024-11-18 03:17:12.744836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:09.211 [2024-11-18 03:17:12.744843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:09.211 [2024-11-18 03:17:12.744852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:09.211 [2024-11-18 03:17:12.744861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:09.211 [2024-11-18 03:17:12.744869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:09.211 [2024-11-18 03:17:12.744877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:09.211 [2024-11-18 03:17:12.744885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:09.211 [2024-11-18 03:17:12.744893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:09.211 [2024-11-18 03:17:12.744900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:09.211 [2024-11-18 03:17:12.744908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:09.211 [2024-11-18 03:17:12.744915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:09.211 [2024-11-18 03:17:12.744923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.211 [2024-11-18 03:17:12.744933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:09.211 [2024-11-18 03:17:12.744940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:09.211 [2024-11-18 03:17:12.744946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.211 [2024-11-18 03:17:12.744955] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:09.211 [2024-11-18 03:17:12.744964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:09.211 [2024-11-18 03:17:12.744971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:09.211 [2024-11-18 03:17:12.744978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.211 [2024-11-18 03:17:12.744992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:09.211 [2024-11-18 03:17:12.745000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:09.211 [2024-11-18 03:17:12.745009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:09.211 
[2024-11-18 03:17:12.745017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:09.211 [2024-11-18 03:17:12.745024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:09.211 [2024-11-18 03:17:12.745031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:09.211 [2024-11-18 03:17:12.745040] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:09.211 [2024-11-18 03:17:12.745051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:09.211 [2024-11-18 03:17:12.745060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:09.211 [2024-11-18 03:17:12.745071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:09.211 [2024-11-18 03:17:12.745080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:09.211 [2024-11-18 03:17:12.745088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:09.211 [2024-11-18 03:17:12.745096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:09.211 [2024-11-18 03:17:12.745103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:09.211 [2024-11-18 03:17:12.745111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:09.211 [2024-11-18 03:17:12.745119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:09.211 [2024-11-18 03:17:12.745129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:09.211 [2024-11-18 03:17:12.745137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:09.211 [2024-11-18 03:17:12.745144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:09.211 [2024-11-18 03:17:12.745151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:09.211 [2024-11-18 03:17:12.745160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:09.211 [2024-11-18 03:17:12.745168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:09.211 [2024-11-18 03:17:12.745178] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:09.211 [2024-11-18 03:17:12.745189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:09.211 [2024-11-18 03:17:12.745197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:09.211 [2024-11-18 03:17:12.745207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:09.211 [2024-11-18 03:17:12.745215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:09.211 [2024-11-18 03:17:12.745222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:09.211 [2024-11-18 03:17:12.745231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.211 [2024-11-18 03:17:12.745240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:09.211 [2024-11-18 03:17:12.745250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:17:09.211 [2024-11-18 03:17:12.745257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.211 [2024-11-18 03:17:12.771547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.211 [2024-11-18 03:17:12.771601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:09.211 [2024-11-18 03:17:12.771617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.236 ms 00:17:09.211 [2024-11-18 03:17:12.771626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.211 [2024-11-18 03:17:12.771793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.211 [2024-11-18 03:17:12.771809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:09.211 [2024-11-18 03:17:12.771819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:09.211 [2024-11-18 03:17:12.771834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.472 [2024-11-18 03:17:12.786626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.472 [2024-11-18 03:17:12.786670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:09.472 [2024-11-18 03:17:12.786684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.761 ms 00:17:09.472 [2024-11-18 03:17:12.786693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.472 [2024-11-18 03:17:12.786764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.472 [2024-11-18 03:17:12.786775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:09.472 [2024-11-18 03:17:12.786788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:09.472 [2024-11-18 03:17:12.786796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.472 [2024-11-18 03:17:12.787469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.472 [2024-11-18 03:17:12.787499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:09.472 [2024-11-18 03:17:12.787512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:17:09.472 [2024-11-18 03:17:12.787522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.472 [2024-11-18 03:17:12.787692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.472 [2024-11-18 03:17:12.787702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:09.472 [2024-11-18 03:17:12.787712] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:17:09.472 [2024-11-18 03:17:12.787727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.472 [2024-11-18 03:17:12.797612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.797662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:09.473 [2024-11-18 03:17:12.797674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.859 ms 00:17:09.473 [2024-11-18 03:17:12.797685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 [2024-11-18 03:17:12.802380] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:09.473 [2024-11-18 03:17:12.802450] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:09.473 [2024-11-18 03:17:12.802465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.802474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:09.473 [2024-11-18 03:17:12.802486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.669 ms 00:17:09.473 [2024-11-18 03:17:12.802495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 [2024-11-18 03:17:12.819290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.819348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:09.473 [2024-11-18 03:17:12.819362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.698 ms 00:17:09.473 [2024-11-18 03:17:12.819380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 [2024-11-18 03:17:12.822623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.822674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:09.473 [2024-11-18 03:17:12.822685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.108 ms 00:17:09.473 [2024-11-18 03:17:12.822692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 [2024-11-18 03:17:12.825460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.825518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:09.473 [2024-11-18 03:17:12.825529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.710 ms 00:17:09.473 [2024-11-18 03:17:12.825536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 [2024-11-18 03:17:12.825915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.825931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:09.473 [2024-11-18 03:17:12.825944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:09.473 [2024-11-18 03:17:12.825952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 [2024-11-18 03:17:12.858697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.858755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:09.473 [2024-11-18 03:17:12.858768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
32.722 ms 00:17:09.473 [2024-11-18 03:17:12.858779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 [2024-11-18 03:17:12.867951] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:09.473 [2024-11-18 03:17:12.893116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.893437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:09.473 [2024-11-18 03:17:12.893459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.227 ms 00:17:09.473 [2024-11-18 03:17:12.893470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 [2024-11-18 03:17:12.893596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.893611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:09.473 [2024-11-18 03:17:12.893622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:09.473 [2024-11-18 03:17:12.893631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 [2024-11-18 03:17:12.893702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.893713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:09.473 [2024-11-18 03:17:12.893727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:09.473 [2024-11-18 03:17:12.893737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 [2024-11-18 03:17:12.893761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.893774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:09.473 [2024-11-18 03:17:12.893783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:09.473 [2024-11-18 03:17:12.893792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 [2024-11-18 03:17:12.893840] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:09.473 [2024-11-18 03:17:12.893855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.893864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:09.473 [2024-11-18 03:17:12.893876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:09.473 [2024-11-18 03:17:12.893889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 [2024-11-18 03:17:12.900888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.901087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:09.473 [2024-11-18 03:17:12.901109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.976 ms 00:17:09.473 [2024-11-18 03:17:12.901119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 [2024-11-18 03:17:12.901565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.473 [2024-11-18 03:17:12.901606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:09.473 [2024-11-18 03:17:12.901627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:17:09.473 [2024-11-18 03:17:12.901636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.473 
[2024-11-18 03:17:12.902952] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:09.473 [2024-11-18 03:17:12.904482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 178.972 ms, result 0 00:17:09.473 [2024-11-18 03:17:12.905798] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:09.473 [2024-11-18 03:17:12.913185] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:10.414  [2024-11-18T03:17:15.364Z] Copying: 13/256 [MB] (13 MBps) [2024-11-18T03:17:16.305Z] Copying: 25/256 [MB] (11 MBps) [2024-11-18T03:17:17.240Z] Copying: 47/256 [MB] (22 MBps) [2024-11-18T03:17:18.177Z] Copying: 58/256 [MB] (10 MBps) [2024-11-18T03:17:19.115Z] Copying: 72/256 [MB] (13 MBps) [2024-11-18T03:17:20.064Z] Copying: 84/256 [MB] (12 MBps) [2024-11-18T03:17:21.003Z] Copying: 99/256 [MB] (14 MBps) [2024-11-18T03:17:22.387Z] Copying: 114/256 [MB] (15 MBps) [2024-11-18T03:17:23.327Z] Copying: 147/256 [MB] (32 MBps) [2024-11-18T03:17:24.266Z] Copying: 173/256 [MB] (25 MBps) [2024-11-18T03:17:25.207Z] Copying: 196/256 [MB] (23 MBps) [2024-11-18T03:17:26.150Z] Copying: 219/256 [MB] (22 MBps) [2024-11-18T03:17:26.722Z] Copying: 244/256 [MB] (24 MBps) [2024-11-18T03:17:26.722Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-18 03:17:26.714291] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.145 [2024-11-18 03:17:26.717044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.145 [2024-11-18 03:17:26.717117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:23.145 [2024-11-18 03:17:26.717149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:23.145 [2024-11-18 03:17:26.717164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.145 [2024-11-18 03:17:26.717212] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:23.145 [2024-11-18 03:17:26.718780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.409 [2024-11-18 03:17:26.718987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:23.409 [2024-11-18 03:17:26.719016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:17:23.409 [2024-11-18 03:17:26.719030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.409 [2024-11-18 03:17:26.719416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.409 [2024-11-18 03:17:26.719433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:23.409 [2024-11-18 03:17:26.719445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:17:23.409 [2024-11-18 03:17:26.719456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.409 [2024-11-18 03:17:26.724252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.409 [2024-11-18 03:17:26.724279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:23.409 [2024-11-18 03:17:26.724292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.770 ms 00:17:23.409 [2024-11-18 03:17:26.724304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.409 
[2024-11-18 03:17:26.732837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.409 [2024-11-18 03:17:26.732882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:23.409 [2024-11-18 03:17:26.732894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.476 ms 00:17:23.409 [2024-11-18 03:17:26.732902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.409 [2024-11-18 03:17:26.736040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.409 [2024-11-18 03:17:26.736225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:23.409 [2024-11-18 03:17:26.736244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.072 ms 00:17:23.409 [2024-11-18 03:17:26.736264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.409 [2024-11-18 03:17:26.741504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.409 [2024-11-18 03:17:26.741574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.409 [2024-11-18 03:17:26.741590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.047 ms 00:17:23.409 [2024-11-18 03:17:26.741598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.409 [2024-11-18 03:17:26.741737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.409 [2024-11-18 03:17:26.741748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.409 [2024-11-18 03:17:26.741757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:23.409 [2024-11-18 03:17:26.741766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.409 [2024-11-18 03:17:26.745259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.409 [2024-11-18 03:17:26.745324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:23.409 [2024-11-18 03:17:26.745335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.474 ms 00:17:23.409 [2024-11-18 03:17:26.745343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.409 [2024-11-18 03:17:26.748068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.409 [2024-11-18 03:17:26.748117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:23.409 [2024-11-18 03:17:26.748128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.664 ms 00:17:23.409 [2024-11-18 03:17:26.748135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.409 [2024-11-18 03:17:26.750495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.409 [2024-11-18 03:17:26.750668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.409 [2024-11-18 03:17:26.750687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.316 ms 00:17:23.409 [2024-11-18 03:17:26.750696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.409 [2024-11-18 03:17:26.753018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.409 [2024-11-18 03:17:26.753070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:23.409 [2024-11-18 03:17:26.753081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms 00:17:23.409 [2024-11-18 03:17:26.753088] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.409 [2024-11-18 03:17:26.753133] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:23.409 [2024-11-18 03:17:26.753158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:23.409 [2024-11-18 03:17:26.753562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753569] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 
03:17:26.753766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:17:23.410 [2024-11-18 03:17:26.753961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-11-18 03:17:26.753985] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:23.410 [2024-11-18 03:17:26.753993] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e65d81cb-0fb4-4410-9e48-4b07019a2d5e 00:17:23.410 [2024-11-18 03:17:26.754011] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:23.410 [2024-11-18 03:17:26.754019] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:23.410 [2024-11-18 03:17:26.754027] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:23.410 [2024-11-18 03:17:26.754035] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:23.410 [2024-11-18 03:17:26.754043] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:23.410 [2024-11-18 03:17:26.754052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:23.410 [2024-11-18 03:17:26.754060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:23.410 [2024-11-18 03:17:26.754067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:23.410 [2024-11-18 03:17:26.754074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:23.410 [2024-11-18 03:17:26.754081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.410 [2024-11-18 03:17:26.754090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:23.410 [2024-11-18 03:17:26.754102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:17:23.410 [2024-11-18 03:17:26.754109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.410 [2024-11-18 03:17:26.756922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.410 [2024-11-18 03:17:26.757070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:23.410 [2024-11-18 03:17:26.757128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.792 ms 00:17:23.410 [2024-11-18 03:17:26.757153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.410 [2024-11-18 03:17:26.757327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.410 [2024-11-18 03:17:26.757509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:23.410 [2024-11-18 03:17:26.757538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:17:23.410 [2024-11-18 03:17:26.757557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.410 [2024-11-18 03:17:26.765155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.410 [2024-11-18 03:17:26.765339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.410 [2024-11-18 03:17:26.765404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.410 [2024-11-18 03:17:26.765429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.410 [2024-11-18 03:17:26.765521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.410 [2024-11-18 03:17:26.765552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:17:23.410 [2024-11-18 03:17:26.765578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.410 [2024-11-18 03:17:26.765633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.410 [2024-11-18 03:17:26.765702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.410 [2024-11-18 03:17:26.765726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.410 [2024-11-18 03:17:26.765747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.410 [2024-11-18 03:17:26.765767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.410 [2024-11-18 03:17:26.765803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.410 [2024-11-18 03:17:26.765892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.410 [2024-11-18 03:17:26.765924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.410 [2024-11-18 03:17:26.765944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.410 [2024-11-18 03:17:26.779668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.410 [2024-11-18 03:17:26.779858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.410 [2024-11-18 03:17:26.779915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.410 [2024-11-18 03:17:26.779939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.410 [2024-11-18 03:17:26.790116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.410 [2024-11-18 03:17:26.790297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.410 [2024-11-18 03:17:26.790338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.410 [2024-11-18 03:17:26.790348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-11-18 03:17:26.790396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.411 [2024-11-18 03:17:26.790408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.411 [2024-11-18 03:17:26.790417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.411 [2024-11-18 03:17:26.790425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-11-18 03:17:26.790479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.411 [2024-11-18 03:17:26.790488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.411 [2024-11-18 03:17:26.790498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.411 [2024-11-18 03:17:26.790512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-11-18 03:17:26.790592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.411 [2024-11-18 03:17:26.790603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.411 [2024-11-18 03:17:26.790611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.411 [2024-11-18 03:17:26.790619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-11-18 03:17:26.790652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:23.411 [2024-11-18 03:17:26.790668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:23.411 [2024-11-18 03:17:26.790676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.411 [2024-11-18 03:17:26.790684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-11-18 03:17:26.790728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.411 [2024-11-18 03:17:26.790737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.411 [2024-11-18 03:17:26.790746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.411 [2024-11-18 03:17:26.790753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-11-18 03:17:26.790800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.411 [2024-11-18 03:17:26.790811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.411 [2024-11-18 03:17:26.790819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.411 [2024-11-18 03:17:26.790831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-11-18 03:17:26.790980] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.920 ms, result 0 00:17:23.671 00:17:23.671 00:17:23.671 03:17:27 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:24.244 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:24.244 03:17:27 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:24.244 03:17:27 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:24.244 03:17:27 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:24.244 03:17:27 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:24.244 03:17:27 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:24.244 03:17:27 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:24.244 03:17:27 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85887 00:17:24.244 03:17:27 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85887 ']' 00:17:24.244 03:17:27 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85887 00:17:24.244 Process with pid 85887 is not found 00:17:24.244 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85887) - No such process 00:17:24.244 03:17:27 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 85887 is not found' 00:17:24.244 00:17:24.244 real 1m9.345s 00:17:24.244 user 1m31.921s 00:17:24.244 sys 0m5.624s 00:17:24.244 03:17:27 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:24.244 ************************************ 00:17:24.244 END TEST ftl_trim 00:17:24.244 ************************************ 00:17:24.244 03:17:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:24.244 03:17:27 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:24.244 03:17:27 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:24.244 03:17:27 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:24.244 03:17:27 ftl -- common/autotest_common.sh@10 -- # set +x 
00:17:24.244 ************************************ 00:17:24.244 START TEST ftl_restore 00:17:24.244 ************************************ 00:17:24.244 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:24.244 * Looking for test storage... 00:17:24.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.244 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:24.244 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:17:24.244 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:24.506 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.506 03:17:27 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:24.506 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.506 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:24.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.507 --rc genhtml_branch_coverage=1 00:17:24.507 --rc genhtml_function_coverage=1 00:17:24.507 --rc genhtml_legend=1 00:17:24.507 --rc geninfo_all_blocks=1 00:17:24.507 --rc geninfo_unexecuted_blocks=1 00:17:24.507 00:17:24.507 ' 00:17:24.507 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:24.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.507 --rc genhtml_branch_coverage=1 00:17:24.507 --rc genhtml_function_coverage=1 00:17:24.507 --rc genhtml_legend=1 00:17:24.507 --rc geninfo_all_blocks=1 00:17:24.507 --rc geninfo_unexecuted_blocks=1 00:17:24.507 00:17:24.507 ' 00:17:24.507 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:24.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.507 --rc genhtml_branch_coverage=1 00:17:24.507 --rc genhtml_function_coverage=1 00:17:24.507 --rc genhtml_legend=1 00:17:24.507 --rc geninfo_all_blocks=1 00:17:24.507 --rc geninfo_unexecuted_blocks=1 00:17:24.507 00:17:24.507 ' 00:17:24.507 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:24.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.507 --rc genhtml_branch_coverage=1 00:17:24.507 --rc genhtml_function_coverage=1 00:17:24.507 --rc genhtml_legend=1 00:17:24.507 --rc geninfo_all_blocks=1 00:17:24.507 --rc geninfo_unexecuted_blocks=1 00:17:24.507 00:17:24.507 ' 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:24.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.fWNNg3i4kR 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86166 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86166 00:17:24.507 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 86166 ']' 00:17:24.507 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.507 03:17:27 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.507 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.507 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.507 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.507 03:17:27 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:24.507 [2024-11-18 03:17:27.977238] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
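The getopts trace above is how restore.sh maps its command line onto devices: 0000:00:10.0 arrives through -c and becomes the NV cache, after "shift 2" the remaining positional argument 0000:00:11.0 becomes the base device, and the 240 feeds the RPC timeout used for bdev_ftl_create further down. A condensed sketch reconstructed from the trace (not restore.sh verbatim):

    #!/usr/bin/env bash
    set -- -c 0000:00:10.0 0000:00:11.0   # the arguments run_test passed in
    while getopts ":u:c:f" opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;            # PCIe address for the write-buffer cache
      esac
    done
    shift 2                               # as traced: drop "-c <bdf>"
    device=$1                             # base device, 0000:00:11.0
    timeout=240                           # later used as "rpc.py -t 240 bdev_ftl_create ..."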
00:17:24.507 [2024-11-18 03:17:27.977374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86166 ] 00:17:24.769 [2024-11-18 03:17:28.122541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.769 [2024-11-18 03:17:28.161258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.342 03:17:28 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:25.342 03:17:28 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:25.342 03:17:28 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:25.342 03:17:28 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:25.342 03:17:28 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:25.342 03:17:28 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:25.342 03:17:28 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:25.342 03:17:28 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:25.604 03:17:29 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:25.604 03:17:29 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:25.604 03:17:29 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:25.604 03:17:29 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:25.604 03:17:29 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:25.604 03:17:29 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:25.604 03:17:29 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:25.604 03:17:29 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:25.865 03:17:29 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:25.865 { 00:17:25.865 "name": "nvme0n1", 00:17:25.865 "aliases": [ 00:17:25.865 "6333c862-6722-4d73-ade8-9627ce065d83" 00:17:25.865 ], 00:17:25.865 "product_name": "NVMe disk", 00:17:25.865 "block_size": 4096, 00:17:25.865 "num_blocks": 1310720, 00:17:25.865 "uuid": "6333c862-6722-4d73-ade8-9627ce065d83", 00:17:25.865 "numa_id": -1, 00:17:25.865 "assigned_rate_limits": { 00:17:25.865 "rw_ios_per_sec": 0, 00:17:25.865 "rw_mbytes_per_sec": 0, 00:17:25.865 "r_mbytes_per_sec": 0, 00:17:25.865 "w_mbytes_per_sec": 0 00:17:25.865 }, 00:17:25.865 "claimed": true, 00:17:25.865 "claim_type": "read_many_write_one", 00:17:25.865 "zoned": false, 00:17:25.865 "supported_io_types": { 00:17:25.865 "read": true, 00:17:25.865 "write": true, 00:17:25.865 "unmap": true, 00:17:25.865 "flush": true, 00:17:25.865 "reset": true, 00:17:25.865 "nvme_admin": true, 00:17:25.865 "nvme_io": true, 00:17:25.865 "nvme_io_md": false, 00:17:25.865 "write_zeroes": true, 00:17:25.865 "zcopy": false, 00:17:25.865 "get_zone_info": false, 00:17:25.865 "zone_management": false, 00:17:25.865 "zone_append": false, 00:17:25.865 "compare": true, 00:17:25.865 "compare_and_write": false, 00:17:25.865 "abort": true, 00:17:25.865 "seek_hole": false, 00:17:25.865 "seek_data": false, 00:17:25.865 "copy": true, 00:17:25.865 "nvme_iov_md": false 00:17:25.865 }, 00:17:25.865 "driver_specific": { 00:17:25.865 "nvme": [ 
00:17:25.865 { 00:17:25.865 "pci_address": "0000:00:11.0", 00:17:25.865 "trid": { 00:17:25.865 "trtype": "PCIe", 00:17:25.865 "traddr": "0000:00:11.0" 00:17:25.865 }, 00:17:25.865 "ctrlr_data": { 00:17:25.865 "cntlid": 0, 00:17:25.865 "vendor_id": "0x1b36", 00:17:25.865 "model_number": "QEMU NVMe Ctrl", 00:17:25.865 "serial_number": "12341", 00:17:25.865 "firmware_revision": "8.0.0", 00:17:25.865 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:25.865 "oacs": { 00:17:25.865 "security": 0, 00:17:25.865 "format": 1, 00:17:25.865 "firmware": 0, 00:17:25.865 "ns_manage": 1 00:17:25.865 }, 00:17:25.865 "multi_ctrlr": false, 00:17:25.865 "ana_reporting": false 00:17:25.865 }, 00:17:25.865 "vs": { 00:17:25.865 "nvme_version": "1.4" 00:17:25.865 }, 00:17:25.865 "ns_data": { 00:17:25.865 "id": 1, 00:17:25.865 "can_share": false 00:17:25.865 } 00:17:25.865 } 00:17:25.865 ], 00:17:25.865 "mp_policy": "active_passive" 00:17:25.865 } 00:17:25.865 } 00:17:25.865 ]' 00:17:25.865 03:17:29 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:25.865 03:17:29 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:25.865 03:17:29 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:25.865 03:17:29 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:25.865 03:17:29 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:25.865 03:17:29 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:17:25.865 03:17:29 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:25.865 03:17:29 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:25.865 03:17:29 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:25.865 03:17:29 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:25.865 03:17:29 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:26.126 03:17:29 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=6095d65a-8a0a-429c-8ac0-36f0cd7fd9e8 00:17:26.126 03:17:29 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:26.126 03:17:29 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6095d65a-8a0a-429c-8ac0-36f0cd7fd9e8 00:17:26.387 03:17:29 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:26.648 03:17:30 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=90dfaef5-0004-49f7-b5bf-fe18413ee04a 00:17:26.648 03:17:30 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 90dfaef5-0004-49f7-b5bf-fe18413ee04a 00:17:26.911 03:17:30 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=97999670-48e0-48fe-9472-58957094d92b 00:17:26.911 03:17:30 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:26.911 03:17:30 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 97999670-48e0-48fe-9472-58957094d92b 00:17:26.911 03:17:30 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:26.911 03:17:30 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:26.911 03:17:30 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=97999670-48e0-48fe-9472-58957094d92b 00:17:26.911 03:17:30 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:26.911 03:17:30 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
97999670-48e0-48fe-9472-58957094d92b 00:17:26.911 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=97999670-48e0-48fe-9472-58957094d92b 00:17:26.911 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:26.911 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:26.911 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:26.911 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 97999670-48e0-48fe-9472-58957094d92b 00:17:27.173 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:27.173 { 00:17:27.173 "name": "97999670-48e0-48fe-9472-58957094d92b", 00:17:27.173 "aliases": [ 00:17:27.173 "lvs/nvme0n1p0" 00:17:27.173 ], 00:17:27.173 "product_name": "Logical Volume", 00:17:27.173 "block_size": 4096, 00:17:27.173 "num_blocks": 26476544, 00:17:27.173 "uuid": "97999670-48e0-48fe-9472-58957094d92b", 00:17:27.173 "assigned_rate_limits": { 00:17:27.173 "rw_ios_per_sec": 0, 00:17:27.173 "rw_mbytes_per_sec": 0, 00:17:27.173 "r_mbytes_per_sec": 0, 00:17:27.173 "w_mbytes_per_sec": 0 00:17:27.173 }, 00:17:27.173 "claimed": false, 00:17:27.173 "zoned": false, 00:17:27.173 "supported_io_types": { 00:17:27.173 "read": true, 00:17:27.173 "write": true, 00:17:27.173 "unmap": true, 00:17:27.173 "flush": false, 00:17:27.173 "reset": true, 00:17:27.173 "nvme_admin": false, 00:17:27.173 "nvme_io": false, 00:17:27.173 "nvme_io_md": false, 00:17:27.173 "write_zeroes": true, 00:17:27.173 "zcopy": false, 00:17:27.173 "get_zone_info": false, 00:17:27.173 "zone_management": false, 00:17:27.173 "zone_append": false, 00:17:27.173 "compare": false, 00:17:27.173 "compare_and_write": false, 00:17:27.173 "abort": false, 00:17:27.173 "seek_hole": true, 00:17:27.173 "seek_data": true, 00:17:27.173 "copy": false, 00:17:27.173 "nvme_iov_md": false 00:17:27.173 }, 00:17:27.173 "driver_specific": { 00:17:27.173 "lvol": { 00:17:27.173 "lvol_store_uuid": "90dfaef5-0004-49f7-b5bf-fe18413ee04a", 00:17:27.173 "base_bdev": "nvme0n1", 00:17:27.173 "thin_provision": true, 00:17:27.173 "num_allocated_clusters": 0, 00:17:27.173 "snapshot": false, 00:17:27.173 "clone": false, 00:17:27.173 "esnap_clone": false 00:17:27.173 } 00:17:27.173 } 00:17:27.173 } 00:17:27.173 ]' 00:17:27.173 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:27.173 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:27.173 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:27.173 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:27.173 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:27.173 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:27.173 03:17:30 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:27.173 03:17:30 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:27.173 03:17:30 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:27.434 03:17:30 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:27.434 03:17:30 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:27.434 03:17:30 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 97999670-48e0-48fe-9472-58957094d92b 00:17:27.434 03:17:30 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=97999670-48e0-48fe-9472-58957094d92b 00:17:27.434 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:27.434 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:27.434 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:27.434 03:17:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 97999670-48e0-48fe-9472-58957094d92b 00:17:27.695 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:27.695 { 00:17:27.695 "name": "97999670-48e0-48fe-9472-58957094d92b", 00:17:27.695 "aliases": [ 00:17:27.695 "lvs/nvme0n1p0" 00:17:27.695 ], 00:17:27.695 "product_name": "Logical Volume", 00:17:27.695 "block_size": 4096, 00:17:27.695 "num_blocks": 26476544, 00:17:27.695 "uuid": "97999670-48e0-48fe-9472-58957094d92b", 00:17:27.695 "assigned_rate_limits": { 00:17:27.695 "rw_ios_per_sec": 0, 00:17:27.695 "rw_mbytes_per_sec": 0, 00:17:27.695 "r_mbytes_per_sec": 0, 00:17:27.695 "w_mbytes_per_sec": 0 00:17:27.695 }, 00:17:27.695 "claimed": false, 00:17:27.695 "zoned": false, 00:17:27.695 "supported_io_types": { 00:17:27.695 "read": true, 00:17:27.695 "write": true, 00:17:27.695 "unmap": true, 00:17:27.695 "flush": false, 00:17:27.695 "reset": true, 00:17:27.695 "nvme_admin": false, 00:17:27.695 "nvme_io": false, 00:17:27.695 "nvme_io_md": false, 00:17:27.695 "write_zeroes": true, 00:17:27.695 "zcopy": false, 00:17:27.695 "get_zone_info": false, 00:17:27.695 "zone_management": false, 00:17:27.695 "zone_append": false, 00:17:27.695 "compare": false, 00:17:27.695 "compare_and_write": false, 00:17:27.695 "abort": false, 00:17:27.695 "seek_hole": true, 00:17:27.695 "seek_data": true, 00:17:27.695 "copy": false, 00:17:27.695 "nvme_iov_md": false 00:17:27.695 }, 00:17:27.695 "driver_specific": { 00:17:27.695 "lvol": { 00:17:27.695 "lvol_store_uuid": "90dfaef5-0004-49f7-b5bf-fe18413ee04a", 00:17:27.695 "base_bdev": "nvme0n1", 00:17:27.695 "thin_provision": true, 00:17:27.695 "num_allocated_clusters": 0, 00:17:27.695 "snapshot": false, 00:17:27.695 "clone": false, 00:17:27.695 "esnap_clone": false 00:17:27.695 } 00:17:27.695 } 00:17:27.695 } 00:17:27.695 ]' 00:17:27.695 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:27.695 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:27.695 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:27.695 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:27.695 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:27.695 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:27.695 03:17:31 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:27.695 03:17:31 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:27.957 03:17:31 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:27.957 03:17:31 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 97999670-48e0-48fe-9472-58957094d92b 00:17:27.957 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=97999670-48e0-48fe-9472-58957094d92b 00:17:27.957 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:27.957 03:17:31 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:17:27.957 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:27.957 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 97999670-48e0-48fe-9472-58957094d92b 00:17:28.218 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:28.218 { 00:17:28.218 "name": "97999670-48e0-48fe-9472-58957094d92b", 00:17:28.218 "aliases": [ 00:17:28.218 "lvs/nvme0n1p0" 00:17:28.218 ], 00:17:28.218 "product_name": "Logical Volume", 00:17:28.218 "block_size": 4096, 00:17:28.218 "num_blocks": 26476544, 00:17:28.218 "uuid": "97999670-48e0-48fe-9472-58957094d92b", 00:17:28.218 "assigned_rate_limits": { 00:17:28.218 "rw_ios_per_sec": 0, 00:17:28.218 "rw_mbytes_per_sec": 0, 00:17:28.218 "r_mbytes_per_sec": 0, 00:17:28.218 "w_mbytes_per_sec": 0 00:17:28.218 }, 00:17:28.218 "claimed": false, 00:17:28.218 "zoned": false, 00:17:28.218 "supported_io_types": { 00:17:28.218 "read": true, 00:17:28.218 "write": true, 00:17:28.218 "unmap": true, 00:17:28.218 "flush": false, 00:17:28.218 "reset": true, 00:17:28.218 "nvme_admin": false, 00:17:28.218 "nvme_io": false, 00:17:28.218 "nvme_io_md": false, 00:17:28.218 "write_zeroes": true, 00:17:28.218 "zcopy": false, 00:17:28.218 "get_zone_info": false, 00:17:28.218 "zone_management": false, 00:17:28.218 "zone_append": false, 00:17:28.218 "compare": false, 00:17:28.218 "compare_and_write": false, 00:17:28.218 "abort": false, 00:17:28.218 "seek_hole": true, 00:17:28.218 "seek_data": true, 00:17:28.218 "copy": false, 00:17:28.218 "nvme_iov_md": false 00:17:28.218 }, 00:17:28.218 "driver_specific": { 00:17:28.219 "lvol": { 00:17:28.219 "lvol_store_uuid": "90dfaef5-0004-49f7-b5bf-fe18413ee04a", 00:17:28.219 "base_bdev": "nvme0n1", 00:17:28.219 "thin_provision": true, 00:17:28.219 "num_allocated_clusters": 0, 00:17:28.219 "snapshot": false, 00:17:28.219 "clone": false, 00:17:28.219 "esnap_clone": false 00:17:28.219 } 00:17:28.219 } 00:17:28.219 } 00:17:28.219 ]' 00:17:28.219 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:28.219 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:28.219 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:28.219 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:28.219 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:28.219 03:17:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:28.219 03:17:31 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:28.219 03:17:31 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 97999670-48e0-48fe-9472-58957094d92b --l2p_dram_limit 10' 00:17:28.219 03:17:31 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:28.219 03:17:31 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:28.219 03:17:31 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:28.219 03:17:31 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:28.219 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:28.219 03:17:31 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 97999670-48e0-48fe-9472-58957094d92b --l2p_dram_limit 10 -c nvc0n1p0 00:17:28.481 
[2024-11-18 03:17:31.793370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.481 [2024-11-18 03:17:31.793504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:28.481 [2024-11-18 03:17:31.793521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.481 [2024-11-18 03:17:31.793529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.481 [2024-11-18 03:17:31.793579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.481 [2024-11-18 03:17:31.793589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.481 [2024-11-18 03:17:31.793595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:28.481 [2024-11-18 03:17:31.793604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.481 [2024-11-18 03:17:31.793625] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:28.481 [2024-11-18 03:17:31.793836] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:28.481 [2024-11-18 03:17:31.793847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.481 [2024-11-18 03:17:31.793854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.481 [2024-11-18 03:17:31.793862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:17:28.481 [2024-11-18 03:17:31.793869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.481 [2024-11-18 03:17:31.793918] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 71c08846-8ea5-4e44-8d58-39f764438c15 00:17:28.481 [2024-11-18 03:17:31.794896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.481 [2024-11-18 03:17:31.794917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:28.481 [2024-11-18 03:17:31.794926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:28.481 [2024-11-18 03:17:31.794933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.481 [2024-11-18 03:17:31.799606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.481 [2024-11-18 03:17:31.799632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.481 [2024-11-18 03:17:31.799640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.638 ms 00:17:28.481 [2024-11-18 03:17:31.799647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.481 [2024-11-18 03:17:31.799705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.481 [2024-11-18 03:17:31.799711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.481 [2024-11-18 03:17:31.799719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:28.481 [2024-11-18 03:17:31.799727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.481 [2024-11-18 03:17:31.799767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.481 [2024-11-18 03:17:31.799774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:28.481 [2024-11-18 03:17:31.799781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:28.481 [2024-11-18 03:17:31.799787] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.481 [2024-11-18 03:17:31.799806] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:28.481 [2024-11-18 03:17:31.801061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.481 [2024-11-18 03:17:31.801159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.481 [2024-11-18 03:17:31.801173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:17:28.481 [2024-11-18 03:17:31.801180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.481 [2024-11-18 03:17:31.801206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.481 [2024-11-18 03:17:31.801213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:28.481 [2024-11-18 03:17:31.801220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:28.481 [2024-11-18 03:17:31.801228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.481 [2024-11-18 03:17:31.801241] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:28.481 [2024-11-18 03:17:31.801361] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:28.481 [2024-11-18 03:17:31.801371] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:28.481 [2024-11-18 03:17:31.801380] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:28.481 [2024-11-18 03:17:31.801388] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:28.481 [2024-11-18 03:17:31.801396] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:28.481 [2024-11-18 03:17:31.801402] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:28.481 [2024-11-18 03:17:31.801411] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:28.481 [2024-11-18 03:17:31.801420] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:28.481 [2024-11-18 03:17:31.801427] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:28.481 [2024-11-18 03:17:31.801435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.481 [2024-11-18 03:17:31.801441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:28.481 [2024-11-18 03:17:31.801447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:17:28.481 [2024-11-18 03:17:31.801454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.481 [2024-11-18 03:17:31.801518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.481 [2024-11-18 03:17:31.801526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:28.481 [2024-11-18 03:17:31.801532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:28.481 [2024-11-18 03:17:31.801538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.482 [2024-11-18 03:17:31.801612] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:28.482 [2024-11-18 03:17:31.801622] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:17:28.482 [2024-11-18 03:17:31.801628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.482 [2024-11-18 03:17:31.801637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.482 [2024-11-18 03:17:31.801643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:28.482 [2024-11-18 03:17:31.801650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:28.482 [2024-11-18 03:17:31.801655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:28.482 [2024-11-18 03:17:31.801661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:28.482 [2024-11-18 03:17:31.801666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:28.482 [2024-11-18 03:17:31.801673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.482 [2024-11-18 03:17:31.801678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:28.482 [2024-11-18 03:17:31.801685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:28.482 [2024-11-18 03:17:31.801691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.482 [2024-11-18 03:17:31.801698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:28.482 [2024-11-18 03:17:31.801703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:28.482 [2024-11-18 03:17:31.801709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.482 [2024-11-18 03:17:31.801714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:28.482 [2024-11-18 03:17:31.801721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:28.482 [2024-11-18 03:17:31.801725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.482 [2024-11-18 03:17:31.801733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:28.482 [2024-11-18 03:17:31.801738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:28.482 [2024-11-18 03:17:31.801745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.482 [2024-11-18 03:17:31.801750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:28.482 [2024-11-18 03:17:31.801758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:28.482 [2024-11-18 03:17:31.801763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.482 [2024-11-18 03:17:31.801771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:28.482 [2024-11-18 03:17:31.801777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:28.482 [2024-11-18 03:17:31.801784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.482 [2024-11-18 03:17:31.801790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:28.482 [2024-11-18 03:17:31.801798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:28.482 [2024-11-18 03:17:31.801804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.482 [2024-11-18 03:17:31.801811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:28.482 [2024-11-18 03:17:31.801817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:28.482 [2024-11-18 03:17:31.801824] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.482 [2024-11-18 03:17:31.801830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:28.482 [2024-11-18 03:17:31.801838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:28.482 [2024-11-18 03:17:31.801843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.482 [2024-11-18 03:17:31.801850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:28.482 [2024-11-18 03:17:31.801856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:28.482 [2024-11-18 03:17:31.801863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.482 [2024-11-18 03:17:31.801869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:28.482 [2024-11-18 03:17:31.801876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:28.482 [2024-11-18 03:17:31.801881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.482 [2024-11-18 03:17:31.801888] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:28.482 [2024-11-18 03:17:31.801895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:28.482 [2024-11-18 03:17:31.801904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.482 [2024-11-18 03:17:31.801910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.482 [2024-11-18 03:17:31.801918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:28.482 [2024-11-18 03:17:31.801924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:28.482 [2024-11-18 03:17:31.801931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:28.482 [2024-11-18 03:17:31.801937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:28.482 [2024-11-18 03:17:31.801945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:28.482 [2024-11-18 03:17:31.801951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:28.482 [2024-11-18 03:17:31.801960] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:28.482 [2024-11-18 03:17:31.801968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.482 [2024-11-18 03:17:31.801979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:28.482 [2024-11-18 03:17:31.801986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:28.482 [2024-11-18 03:17:31.801993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:28.482 [2024-11-18 03:17:31.801999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:28.482 [2024-11-18 03:17:31.802007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:28.482 [2024-11-18 03:17:31.802014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:17:28.482 [2024-11-18 03:17:31.802023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:28.482 [2024-11-18 03:17:31.802029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:28.482 [2024-11-18 03:17:31.802036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:28.482 [2024-11-18 03:17:31.802042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:28.482 [2024-11-18 03:17:31.802050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:28.482 [2024-11-18 03:17:31.802056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:28.482 [2024-11-18 03:17:31.802063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:28.482 [2024-11-18 03:17:31.802070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:28.482 [2024-11-18 03:17:31.802077] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:28.482 [2024-11-18 03:17:31.802085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.482 [2024-11-18 03:17:31.802094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:28.482 [2024-11-18 03:17:31.802101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:28.482 [2024-11-18 03:17:31.802108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:28.482 [2024-11-18 03:17:31.802114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:28.482 [2024-11-18 03:17:31.802122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.482 [2024-11-18 03:17:31.802128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:28.482 [2024-11-18 03:17:31.802136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:17:28.482 [2024-11-18 03:17:31.802141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.482 [2024-11-18 03:17:31.802173] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
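The layout dump above can be cross-checked from the geometry it reports. FTL uses a 4-byte L2P address per logical block ("L2P address size: 4"), so the 20971520 L2P entries need

    $ echo "$(( 20971520 * 4 / 1048576 )) MiB"
    80 MiB

which is exactly the "Region l2p ... blocks: 80.00 MiB" size reported earlier in the dump. The --l2p_dram_limit 10 passed to bdev_ftl_create means only a small window of that 80 MiB table may stay resident in DRAM, which matches the "l2p maximum resident size is: 9 (of 10) MiB" notice printed once L2P initialization starts below.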
00:17:28.482 [2024-11-18 03:17:31.802180] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:31.786 [2024-11-18 03:17:34.714109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.786 [2024-11-18 03:17:34.714178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:31.786 [2024-11-18 03:17:34.714199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2911.914 ms 00:17:31.786 [2024-11-18 03:17:34.714208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.786 [2024-11-18 03:17:34.724005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.786 [2024-11-18 03:17:34.724196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.786 [2024-11-18 03:17:34.724220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.705 ms 00:17:31.786 [2024-11-18 03:17:34.724228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.786 [2024-11-18 03:17:34.724333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.786 [2024-11-18 03:17:34.724343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:31.786 [2024-11-18 03:17:34.724363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:31.786 [2024-11-18 03:17:34.724370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.786 [2024-11-18 03:17:34.733203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.786 [2024-11-18 03:17:34.733245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.786 [2024-11-18 03:17:34.733258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.781 ms 00:17:31.786 [2024-11-18 03:17:34.733265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.786 [2024-11-18 03:17:34.733298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.786 [2024-11-18 03:17:34.733338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.786 [2024-11-18 03:17:34.733350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:31.786 [2024-11-18 03:17:34.733358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.786 [2024-11-18 03:17:34.733729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.786 [2024-11-18 03:17:34.733746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.786 [2024-11-18 03:17:34.733757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:17:31.786 [2024-11-18 03:17:34.733764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.786 [2024-11-18 03:17:34.733874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.786 [2024-11-18 03:17:34.733884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.786 [2024-11-18 03:17:34.733900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:31.786 [2024-11-18 03:17:34.733909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.786 [2024-11-18 03:17:34.754409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.786 [2024-11-18 03:17:34.754516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.786 [2024-11-18 
03:17:34.754553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.467 ms 00:17:31.786 [2024-11-18 03:17:34.754575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.786 [2024-11-18 03:17:34.765148] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:31.786 [2024-11-18 03:17:34.768461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.786 [2024-11-18 03:17:34.768502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:31.786 [2024-11-18 03:17:34.768513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.679 ms 00:17:31.786 [2024-11-18 03:17:34.768524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.786 [2024-11-18 03:17:34.845717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.786 [2024-11-18 03:17:34.845792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:31.786 [2024-11-18 03:17:34.845807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.156 ms 00:17:31.786 [2024-11-18 03:17:34.845822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.786 [2024-11-18 03:17:34.846036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.786 [2024-11-18 03:17:34.846052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:31.786 [2024-11-18 03:17:34.846061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:17:31.786 [2024-11-18 03:17:34.846071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.786 [2024-11-18 03:17:34.852271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.786 [2024-11-18 03:17:34.852350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:31.786 [2024-11-18 03:17:34.852363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.161 ms 00:17:31.786 [2024-11-18 03:17:34.852374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.787 [2024-11-18 03:17:34.857720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.787 [2024-11-18 03:17:34.857774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:31.787 [2024-11-18 03:17:34.857785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.293 ms 00:17:31.787 [2024-11-18 03:17:34.857794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.787 [2024-11-18 03:17:34.858133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.787 [2024-11-18 03:17:34.858146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:31.787 [2024-11-18 03:17:34.858155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:17:31.787 [2024-11-18 03:17:34.858168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.787 [2024-11-18 03:17:34.898676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.787 [2024-11-18 03:17:34.898756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:31.787 [2024-11-18 03:17:34.898769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.485 ms 00:17:31.787 [2024-11-18 03:17:34.898780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.787 [2024-11-18 03:17:34.905891] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.787 [2024-11-18 03:17:34.905952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:31.787 [2024-11-18 03:17:34.905964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.026 ms 00:17:31.787 [2024-11-18 03:17:34.905975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.787 [2024-11-18 03:17:34.912119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.787 [2024-11-18 03:17:34.912176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:31.787 [2024-11-18 03:17:34.912187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.096 ms 00:17:31.787 [2024-11-18 03:17:34.912196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.787 [2024-11-18 03:17:34.918646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.787 [2024-11-18 03:17:34.918705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:31.787 [2024-11-18 03:17:34.918715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.402 ms 00:17:31.787 [2024-11-18 03:17:34.918727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.787 [2024-11-18 03:17:34.918780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.787 [2024-11-18 03:17:34.918798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:31.787 [2024-11-18 03:17:34.918807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:31.787 [2024-11-18 03:17:34.918817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.787 [2024-11-18 03:17:34.918890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.787 [2024-11-18 03:17:34.918903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:31.787 [2024-11-18 03:17:34.918911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:31.787 [2024-11-18 03:17:34.918928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.787 [2024-11-18 03:17:34.920008] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3126.166 ms, result 0 00:17:31.787 { 00:17:31.787 "name": "ftl0", 00:17:31.787 "uuid": "71c08846-8ea5-4e44-8d58-39f764438c15" 00:17:31.787 } 00:17:31.787 03:17:34 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:31.787 03:17:34 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:31.787 03:17:35 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:31.787 03:17:35 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:32.049 [2024-11-18 03:17:35.365790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.049 [2024-11-18 03:17:35.365835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:32.049 [2024-11-18 03:17:35.365855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:32.049 [2024-11-18 03:17:35.365864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.049 [2024-11-18 03:17:35.365891] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:32.049 
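The restore.sh lines 61-65 traced just above show how the test snapshots the bdev configuration before tearing FTL down: it brackets the output of save_subsystem_config in a top-level "subsystems" array and then calls bdev_ftl_unload. A minimal re-creation of that pattern (the exact redirection target is not shown in this log, but the file is later consumed as test/ftl/config/ftl.json by spdk_dd --json):

    # Sketch of the config-capture step; paths are the test VM's from the trace above.
    {
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    # Unload ftl0; the surrounding 'FTL shutdown' management trace is this call's effect.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0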
[2024-11-18 03:17:35.366384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.049 [2024-11-18 03:17:35.366407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:32.049 [2024-11-18 03:17:35.366416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:17:32.049 [2024-11-18 03:17:35.366429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.049 [2024-11-18 03:17:35.366692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.049 [2024-11-18 03:17:35.366705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:32.049 [2024-11-18 03:17:35.366714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:17:32.049 [2024-11-18 03:17:35.366724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.049 [2024-11-18 03:17:35.369961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.049 [2024-11-18 03:17:35.369985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:32.049 [2024-11-18 03:17:35.369994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:17:32.049 [2024-11-18 03:17:35.370003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.049 [2024-11-18 03:17:35.376211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.049 [2024-11-18 03:17:35.376244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:32.049 [2024-11-18 03:17:35.376254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.192 ms 00:17:32.049 [2024-11-18 03:17:35.376263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.049 [2024-11-18 03:17:35.378749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.049 [2024-11-18 03:17:35.378792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:32.049 [2024-11-18 03:17:35.378801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.406 ms 00:17:32.049 [2024-11-18 03:17:35.378813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.049 [2024-11-18 03:17:35.383300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.049 [2024-11-18 03:17:35.383356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:32.049 [2024-11-18 03:17:35.383366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.454 ms 00:17:32.049 [2024-11-18 03:17:35.383375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.049 [2024-11-18 03:17:35.383492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.049 [2024-11-18 03:17:35.383508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:32.049 [2024-11-18 03:17:35.383517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:32.049 [2024-11-18 03:17:35.383525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.049 [2024-11-18 03:17:35.385900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.049 [2024-11-18 03:17:35.385935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:32.049 [2024-11-18 03:17:35.385944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.356 ms 00:17:32.049 [2024-11-18 03:17:35.385952] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.049 [2024-11-18 03:17:35.387876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.049 [2024-11-18 03:17:35.387915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:32.049 [2024-11-18 03:17:35.387923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:17:32.049 [2024-11-18 03:17:35.387932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.049 [2024-11-18 03:17:35.389671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.049 [2024-11-18 03:17:35.389706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:32.049 [2024-11-18 03:17:35.389714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.708 ms 00:17:32.049 [2024-11-18 03:17:35.389723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.049 [2024-11-18 03:17:35.391530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.049 [2024-11-18 03:17:35.391564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:32.050 [2024-11-18 03:17:35.391572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:17:32.050 [2024-11-18 03:17:35.391583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.050 [2024-11-18 03:17:35.391613] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:32.050 [2024-11-18 03:17:35.391629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391754] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 
[2024-11-18 03:17:35.391967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.391992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:17:32.050 [2024-11-18 03:17:35.392181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:32.050 [2024-11-18 03:17:35.392239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:32.051 [2024-11-18 03:17:35.392512] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:32.051 [2024-11-18 03:17:35.392522] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71c08846-8ea5-4e44-8d58-39f764438c15 00:17:32.051 [2024-11-18 03:17:35.392531] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:32.051 [2024-11-18 03:17:35.392539] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:32.051 [2024-11-18 03:17:35.392548] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:32.051 [2024-11-18 03:17:35.392558] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:32.051 [2024-11-18 03:17:35.392567] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:32.051 [2024-11-18 03:17:35.392577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:32.051 [2024-11-18 03:17:35.392586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:32.051 [2024-11-18 03:17:35.392593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:32.051 [2024-11-18 03:17:35.392601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:32.051 [2024-11-18 03:17:35.392608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.051 [2024-11-18 03:17:35.392619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:32.051 [2024-11-18 03:17:35.392628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:17:32.051 [2024-11-18 03:17:35.392636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.051 [2024-11-18 03:17:35.394109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.051 [2024-11-18 03:17:35.394131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:32.051 
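One detail in the statistics dump above: "WAF: inf" is not an error. Assuming the usual definition of write amplification, device writes divided by user writes, the 960 total writes here are all metadata and scrub traffic, so with "user writes: 0" the ratio is

    WAF = total writes / user writes = 960 / 0 -> inf

which is plausible for a freshly created FTL instance that has only ever written its own metadata.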
[2024-11-18 03:17:35.394140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.455 ms 00:17:32.051 [2024-11-18 03:17:35.394149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.051 [2024-11-18 03:17:35.394231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.051 [2024-11-18 03:17:35.394241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:32.051 [2024-11-18 03:17:35.394249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:32.051 [2024-11-18 03:17:35.394258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.051 [2024-11-18 03:17:35.399595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.051 [2024-11-18 03:17:35.399631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.051 [2024-11-18 03:17:35.399640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.051 [2024-11-18 03:17:35.399650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.051 [2024-11-18 03:17:35.399704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.051 [2024-11-18 03:17:35.399714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.051 [2024-11-18 03:17:35.399721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.051 [2024-11-18 03:17:35.399734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.051 [2024-11-18 03:17:35.399802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.051 [2024-11-18 03:17:35.399817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.051 [2024-11-18 03:17:35.399825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.051 [2024-11-18 03:17:35.399834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.051 [2024-11-18 03:17:35.399853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.051 [2024-11-18 03:17:35.399864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.051 [2024-11-18 03:17:35.399872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.051 [2024-11-18 03:17:35.399881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.051 [2024-11-18 03:17:35.408727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.051 [2024-11-18 03:17:35.408769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.051 [2024-11-18 03:17:35.408779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.051 [2024-11-18 03:17:35.408789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.051 [2024-11-18 03:17:35.416660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.051 [2024-11-18 03:17:35.416705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.051 [2024-11-18 03:17:35.416715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.051 [2024-11-18 03:17:35.416727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.051 [2024-11-18 03:17:35.416775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.051 [2024-11-18 03:17:35.416787] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.051 [2024-11-18 03:17:35.416795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.051 [2024-11-18 03:17:35.416804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.051 [2024-11-18 03:17:35.416855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.051 [2024-11-18 03:17:35.416866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.051 [2024-11-18 03:17:35.416875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.051 [2024-11-18 03:17:35.416884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.051 [2024-11-18 03:17:35.416953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.051 [2024-11-18 03:17:35.416964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.052 [2024-11-18 03:17:35.416972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.052 [2024-11-18 03:17:35.416981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.052 [2024-11-18 03:17:35.417009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.052 [2024-11-18 03:17:35.417020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:32.052 [2024-11-18 03:17:35.417028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.052 [2024-11-18 03:17:35.417039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.052 [2024-11-18 03:17:35.417073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.052 [2024-11-18 03:17:35.417085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.052 [2024-11-18 03:17:35.417092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.052 [2024-11-18 03:17:35.417101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.052 [2024-11-18 03:17:35.417141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.052 [2024-11-18 03:17:35.417157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:32.052 [2024-11-18 03:17:35.417167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.052 [2024-11-18 03:17:35.417176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.052 [2024-11-18 03:17:35.417296] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.482 ms, result 0 00:17:32.052 true 00:17:32.052 03:17:35 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86166 00:17:32.052 03:17:35 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86166 ']' 00:17:32.052 03:17:35 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86166 00:17:32.052 03:17:35 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:17:32.052 03:17:35 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.052 03:17:35 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86166 00:17:32.052 03:17:35 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:32.052 killing process with pid 86166 00:17:32.052 03:17:35 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:32.052 
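The shell trace above (together with the echo/kill/wait lines that follow) walks through autotest_common.sh's killprocess helper. A simplified sketch of the flow as exercised in this run, reconstructed only from the traced checks; the real helper in test/common/autotest_common.sh has more branches, and in particular what it does when the process name turns out to be sudo is not shown here:

    # Reconstruction of the killprocess flow traced above (refs like @954 are the
    # autotest_common.sh line numbers echoed by the xtrace output).
    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                         # @950: refuse an empty pid
      kill -0 "$pid" || return 1                        # @954: bail if already gone
      local process_name
      if [ "$(uname)" = Linux ]; then                   # @955: Linux-only name lookup
        process_name=$(ps --no-headers -o comm= "$pid") # @956: yields reactor_0 here
      fi
      if [ "$process_name" = sudo ]; then               # @960: false in this run; the
        return 1                                        # real helper handles sudo
      fi                                                # differently (not shown)
      echo "killing process with pid $pid"              # @968
      kill "$pid"                                       # @969
      wait "$pid"                                       # @974: reap, propagate status
    }

Once the process is reaped, restore.sh line 69 (just below) prefills the test file. The numbers are self-consistent: bs=4K count=256K is 262144 records of 4096 bytes, i.e. 1073741824 bytes = 1 GiB, and 1073741824 B / 3.62221 s is roughly 296 MB/s, matching dd's own summary line.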
03:17:35 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86166' 00:17:32.052 03:17:35 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 86166 00:17:32.052 03:17:35 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 86166 00:17:37.346 03:17:40 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:40.649 262144+0 records in 00:17:40.649 262144+0 records out 00:17:40.649 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.62221 s, 296 MB/s 00:17:40.649 03:17:43 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:42.566 03:17:45 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:42.566 [2024-11-18 03:17:45.982178] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:42.566 [2024-11-18 03:17:45.982343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86375 ] 00:17:42.566 [2024-11-18 03:17:46.132425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.828 [2024-11-18 03:17:46.167966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.828 [2024-11-18 03:17:46.272760] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:42.828 [2024-11-18 03:17:46.272848] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:43.092 [2024-11-18 03:17:46.433461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.092 [2024-11-18 03:17:46.433523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:43.092 [2024-11-18 03:17:46.433541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:43.092 [2024-11-18 03:17:46.433550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.092 [2024-11-18 03:17:46.433610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.092 [2024-11-18 03:17:46.433622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:43.092 [2024-11-18 03:17:46.433631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:43.092 [2024-11-18 03:17:46.433639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.092 [2024-11-18 03:17:46.433660] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:43.092 [2024-11-18 03:17:46.433941] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:43.092 [2024-11-18 03:17:46.433958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.092 [2024-11-18 03:17:46.433967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:43.092 [2024-11-18 03:17:46.433979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:17:43.092 [2024-11-18 03:17:46.433991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.092 [2024-11-18 03:17:46.436106] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: 
[FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:43.092 [2024-11-18 03:17:46.439685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.092 [2024-11-18 03:17:46.439746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:43.092 [2024-11-18 03:17:46.439764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.584 ms 00:17:43.092 [2024-11-18 03:17:46.439772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.092 [2024-11-18 03:17:46.439852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.092 [2024-11-18 03:17:46.439863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:43.092 [2024-11-18 03:17:46.439875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:43.092 [2024-11-18 03:17:46.439883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.092 [2024-11-18 03:17:46.447857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.092 [2024-11-18 03:17:46.447900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:43.092 [2024-11-18 03:17:46.447911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.923 ms 00:17:43.092 [2024-11-18 03:17:46.447923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.092 [2024-11-18 03:17:46.448028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.092 [2024-11-18 03:17:46.448039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:43.092 [2024-11-18 03:17:46.448048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:43.092 [2024-11-18 03:17:46.448057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.092 [2024-11-18 03:17:46.448116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.092 [2024-11-18 03:17:46.448126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:43.092 [2024-11-18 03:17:46.448135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:43.092 [2024-11-18 03:17:46.448142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.092 [2024-11-18 03:17:46.448174] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:43.092 [2024-11-18 03:17:46.450196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.092 [2024-11-18 03:17:46.450245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:43.092 [2024-11-18 03:17:46.450255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.037 ms 00:17:43.092 [2024-11-18 03:17:46.450263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.092 [2024-11-18 03:17:46.450298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.092 [2024-11-18 03:17:46.450325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:43.092 [2024-11-18 03:17:46.450343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:43.092 [2024-11-18 03:17:46.450351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.092 [2024-11-18 03:17:46.450373] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:43.092 [2024-11-18 03:17:46.450400] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:43.092 [2024-11-18 03:17:46.450437] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:43.092 [2024-11-18 03:17:46.450485] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:43.092 [2024-11-18 03:17:46.450593] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:43.092 [2024-11-18 03:17:46.450604] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:43.092 [2024-11-18 03:17:46.450615] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:43.092 [2024-11-18 03:17:46.450626] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:43.092 [2024-11-18 03:17:46.450639] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:43.092 [2024-11-18 03:17:46.450647] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:43.092 [2024-11-18 03:17:46.450654] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:43.092 [2024-11-18 03:17:46.450662] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:43.092 [2024-11-18 03:17:46.450670] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:43.092 [2024-11-18 03:17:46.450678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.092 [2024-11-18 03:17:46.450686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:43.092 [2024-11-18 03:17:46.450694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:17:43.092 [2024-11-18 03:17:46.450701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.092 [2024-11-18 03:17:46.450787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.092 [2024-11-18 03:17:46.450798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:43.092 [2024-11-18 03:17:46.450809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:43.092 [2024-11-18 03:17:46.450816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.092 [2024-11-18 03:17:46.450917] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:43.092 [2024-11-18 03:17:46.450930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:43.092 [2024-11-18 03:17:46.450939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.092 [2024-11-18 03:17:46.450964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.092 [2024-11-18 03:17:46.450972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:43.092 [2024-11-18 03:17:46.450980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:43.092 [2024-11-18 03:17:46.450989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:43.092 [2024-11-18 03:17:46.450997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:43.092 [2024-11-18 03:17:46.451005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:43.092 [2024-11-18 
03:17:46.451013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.092 [2024-11-18 03:17:46.451020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:43.092 [2024-11-18 03:17:46.451028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:43.092 [2024-11-18 03:17:46.451039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.092 [2024-11-18 03:17:46.451047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:43.092 [2024-11-18 03:17:46.451055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:43.092 [2024-11-18 03:17:46.451066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.092 [2024-11-18 03:17:46.451074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:43.092 [2024-11-18 03:17:46.451082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:43.092 [2024-11-18 03:17:46.451090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.092 [2024-11-18 03:17:46.451098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:43.092 [2024-11-18 03:17:46.451106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:43.092 [2024-11-18 03:17:46.451114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.092 [2024-11-18 03:17:46.451122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:43.092 [2024-11-18 03:17:46.451131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:43.092 [2024-11-18 03:17:46.451139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.092 [2024-11-18 03:17:46.451146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:43.092 [2024-11-18 03:17:46.451154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:43.092 [2024-11-18 03:17:46.451162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.092 [2024-11-18 03:17:46.451175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:43.092 [2024-11-18 03:17:46.451183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:43.092 [2024-11-18 03:17:46.451191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.092 [2024-11-18 03:17:46.451199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:43.092 [2024-11-18 03:17:46.451207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:43.092 [2024-11-18 03:17:46.451214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.092 [2024-11-18 03:17:46.451221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:43.093 [2024-11-18 03:17:46.451228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:43.093 [2024-11-18 03:17:46.451236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.093 [2024-11-18 03:17:46.451246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:43.093 [2024-11-18 03:17:46.451253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:43.093 [2024-11-18 03:17:46.451260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.093 [2024-11-18 03:17:46.451268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:17:43.093 [2024-11-18 03:17:46.451276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:43.093 [2024-11-18 03:17:46.451283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.093 [2024-11-18 03:17:46.451292] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:43.093 [2024-11-18 03:17:46.451303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:43.093 [2024-11-18 03:17:46.451326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.093 [2024-11-18 03:17:46.451336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.093 [2024-11-18 03:17:46.451346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:43.093 [2024-11-18 03:17:46.451354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:43.093 [2024-11-18 03:17:46.451361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:43.093 [2024-11-18 03:17:46.451369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:43.093 [2024-11-18 03:17:46.451375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:43.093 [2024-11-18 03:17:46.451383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:43.093 [2024-11-18 03:17:46.451392] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:43.093 [2024-11-18 03:17:46.451402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.093 [2024-11-18 03:17:46.451415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:43.093 [2024-11-18 03:17:46.451423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:43.093 [2024-11-18 03:17:46.451430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:43.093 [2024-11-18 03:17:46.451438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:43.093 [2024-11-18 03:17:46.451446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:43.093 [2024-11-18 03:17:46.451456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:43.093 [2024-11-18 03:17:46.451464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:43.093 [2024-11-18 03:17:46.451472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:43.093 [2024-11-18 03:17:46.451480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:43.093 [2024-11-18 03:17:46.451487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:43.093 [2024-11-18 03:17:46.451494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:43.093 [2024-11-18 03:17:46.451502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:43.093 [2024-11-18 03:17:46.451509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:43.093 [2024-11-18 03:17:46.451516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:43.093 [2024-11-18 03:17:46.451524] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:43.093 [2024-11-18 03:17:46.451533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.093 [2024-11-18 03:17:46.451541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:43.093 [2024-11-18 03:17:46.451548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:43.093 [2024-11-18 03:17:46.451555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:43.093 [2024-11-18 03:17:46.451562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:43.093 [2024-11-18 03:17:46.451569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.451583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:43.093 [2024-11-18 03:17:46.451591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:17:43.093 [2024-11-18 03:17:46.451598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.477460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.477557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:43.093 [2024-11-18 03:17:46.477602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.810 ms 00:17:43.093 [2024-11-18 03:17:46.477625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.477862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.477888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:43.093 [2024-11-18 03:17:46.477925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:17:43.093 [2024-11-18 03:17:46.477946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.490511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.490559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:43.093 [2024-11-18 03:17:46.490571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.426 ms 00:17:43.093 [2024-11-18 03:17:46.490578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.490613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 
03:17:46.490622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:43.093 [2024-11-18 03:17:46.490631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:43.093 [2024-11-18 03:17:46.490639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.491198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.491226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:43.093 [2024-11-18 03:17:46.491238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:17:43.093 [2024-11-18 03:17:46.491246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.491414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.491426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:43.093 [2024-11-18 03:17:46.491436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:17:43.093 [2024-11-18 03:17:46.491445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.498132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.498175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:43.093 [2024-11-18 03:17:46.498192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.662 ms 00:17:43.093 [2024-11-18 03:17:46.498200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.501977] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:43.093 [2024-11-18 03:17:46.502031] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:43.093 [2024-11-18 03:17:46.502044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.502052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:43.093 [2024-11-18 03:17:46.502061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.741 ms 00:17:43.093 [2024-11-18 03:17:46.502068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.517917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.517969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:43.093 [2024-11-18 03:17:46.517981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.797 ms 00:17:43.093 [2024-11-18 03:17:46.517992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.521023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.521072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:43.093 [2024-11-18 03:17:46.521083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:17:43.093 [2024-11-18 03:17:46.521090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.523787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.523838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:17:43.093 [2024-11-18 03:17:46.523848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.650 ms 00:17:43.093 [2024-11-18 03:17:46.523856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.524218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.524238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:43.093 [2024-11-18 03:17:46.524248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:17:43.093 [2024-11-18 03:17:46.524255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.548135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.548213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:43.093 [2024-11-18 03:17:46.548232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.860 ms 00:17:43.093 [2024-11-18 03:17:46.548240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.093 [2024-11-18 03:17:46.556302] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:43.093 [2024-11-18 03:17:46.559419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.093 [2024-11-18 03:17:46.559467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:43.094 [2024-11-18 03:17:46.559480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.124 ms 00:17:43.094 [2024-11-18 03:17:46.559496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.094 [2024-11-18 03:17:46.559575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.094 [2024-11-18 03:17:46.559586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:43.094 [2024-11-18 03:17:46.559596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:43.094 [2024-11-18 03:17:46.559605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.094 [2024-11-18 03:17:46.559675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.094 [2024-11-18 03:17:46.559686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:43.094 [2024-11-18 03:17:46.559695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:43.094 [2024-11-18 03:17:46.559703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.094 [2024-11-18 03:17:46.559728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.094 [2024-11-18 03:17:46.559737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:43.094 [2024-11-18 03:17:46.559746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:43.094 [2024-11-18 03:17:46.559754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.094 [2024-11-18 03:17:46.559797] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:43.094 [2024-11-18 03:17:46.559814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.094 [2024-11-18 03:17:46.559822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:43.094 [2024-11-18 03:17:46.559831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.015 ms 00:17:43.094 [2024-11-18 03:17:46.559839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.094 [2024-11-18 03:17:46.565465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.094 [2024-11-18 03:17:46.565510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:43.094 [2024-11-18 03:17:46.565528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.607 ms 00:17:43.094 [2024-11-18 03:17:46.565541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.094 [2024-11-18 03:17:46.565628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.094 [2024-11-18 03:17:46.565638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:43.094 [2024-11-18 03:17:46.565648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:43.094 [2024-11-18 03:17:46.565656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.094 [2024-11-18 03:17:46.567247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.330 ms, result 0 00:17:44.037  [2024-11-18T03:17:49.001Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-18T03:17:49.944Z] Copying: 23/1024 [MB] (13 MBps) [2024-11-18T03:17:50.889Z] Copying: 54/1024 [MB] (30 MBps) [2024-11-18T03:17:51.903Z] Copying: 73/1024 [MB] (19 MBps) [2024-11-18T03:17:52.852Z] Copying: 89/1024 [MB] (15 MBps) [2024-11-18T03:17:53.796Z] Copying: 106/1024 [MB] (17 MBps) [2024-11-18T03:17:54.738Z] Copying: 132/1024 [MB] (26 MBps) [2024-11-18T03:17:55.682Z] Copying: 187/1024 [MB] (54 MBps) [2024-11-18T03:17:56.625Z] Copying: 216/1024 [MB] (29 MBps) [2024-11-18T03:17:58.012Z] Copying: 235/1024 [MB] (18 MBps) [2024-11-18T03:17:58.584Z] Copying: 256/1024 [MB] (20 MBps) [2024-11-18T03:17:59.973Z] Copying: 277/1024 [MB] (20 MBps) [2024-11-18T03:18:00.917Z] Copying: 294/1024 [MB] (17 MBps) [2024-11-18T03:18:01.862Z] Copying: 311928/1048576 [kB] (10056 kBps) [2024-11-18T03:18:02.803Z] Copying: 316/1024 [MB] (11 MBps) [2024-11-18T03:18:03.748Z] Copying: 334/1024 [MB] (18 MBps) [2024-11-18T03:18:04.692Z] Copying: 355/1024 [MB] (21 MBps) [2024-11-18T03:18:05.635Z] Copying: 405/1024 [MB] (50 MBps) [2024-11-18T03:18:07.021Z] Copying: 421/1024 [MB] (15 MBps) [2024-11-18T03:18:07.592Z] Copying: 434/1024 [MB] (13 MBps) [2024-11-18T03:18:08.980Z] Copying: 450/1024 [MB] (15 MBps) [2024-11-18T03:18:09.924Z] Copying: 460/1024 [MB] (10 MBps) [2024-11-18T03:18:10.868Z] Copying: 475/1024 [MB] (14 MBps) [2024-11-18T03:18:11.810Z] Copying: 496/1024 [MB] (20 MBps) [2024-11-18T03:18:12.754Z] Copying: 517/1024 [MB] (21 MBps) [2024-11-18T03:18:13.730Z] Copying: 534/1024 [MB] (16 MBps) [2024-11-18T03:18:14.673Z] Copying: 554/1024 [MB] (20 MBps) [2024-11-18T03:18:15.616Z] Copying: 572/1024 [MB] (18 MBps) [2024-11-18T03:18:17.003Z] Copying: 592/1024 [MB] (19 MBps) [2024-11-18T03:18:17.948Z] Copying: 609/1024 [MB] (17 MBps) [2024-11-18T03:18:18.890Z] Copying: 627/1024 [MB] (17 MBps) [2024-11-18T03:18:19.833Z] Copying: 639/1024 [MB] (11 MBps) [2024-11-18T03:18:20.778Z] Copying: 649/1024 [MB] (10 MBps) [2024-11-18T03:18:21.754Z] Copying: 660/1024 [MB] (10 MBps) [2024-11-18T03:18:22.698Z] Copying: 670/1024 [MB] (10 MBps) [2024-11-18T03:18:23.642Z] Copying: 683/1024 [MB] (13 MBps) [2024-11-18T03:18:24.587Z] Copying: 702/1024 [MB] (18 MBps) [2024-11-18T03:18:25.972Z] Copying: 716/1024 [MB] (14 MBps) [2024-11-18T03:18:26.915Z] Copying: 726/1024 
[MB] (10 MBps) [2024-11-18T03:18:27.858Z] Copying: 737/1024 [MB] (10 MBps) [2024-11-18T03:18:28.801Z] Copying: 747/1024 [MB] (10 MBps) [2024-11-18T03:18:29.741Z] Copying: 776/1024 [MB] (28 MBps) [2024-11-18T03:18:30.686Z] Copying: 811/1024 [MB] (34 MBps) [2024-11-18T03:18:31.629Z] Copying: 824/1024 [MB] (13 MBps) [2024-11-18T03:18:33.015Z] Copying: 841/1024 [MB] (16 MBps) [2024-11-18T03:18:33.587Z] Copying: 851/1024 [MB] (10 MBps) [2024-11-18T03:18:34.972Z] Copying: 861/1024 [MB] (10 MBps) [2024-11-18T03:18:35.916Z] Copying: 872/1024 [MB] (10 MBps) [2024-11-18T03:18:36.857Z] Copying: 882/1024 [MB] (10 MBps) [2024-11-18T03:18:37.800Z] Copying: 892/1024 [MB] (10 MBps) [2024-11-18T03:18:38.744Z] Copying: 909/1024 [MB] (17 MBps) [2024-11-18T03:18:39.688Z] Copying: 919/1024 [MB] (10 MBps) [2024-11-18T03:18:40.631Z] Copying: 932/1024 [MB] (12 MBps) [2024-11-18T03:18:42.017Z] Copying: 952/1024 [MB] (19 MBps) [2024-11-18T03:18:42.589Z] Copying: 970/1024 [MB] (18 MBps) [2024-11-18T03:18:43.976Z] Copying: 985/1024 [MB] (14 MBps) [2024-11-18T03:18:44.919Z] Copying: 998/1024 [MB] (13 MBps) [2024-11-18T03:18:45.492Z] Copying: 1011/1024 [MB] (13 MBps) [2024-11-18T03:18:45.492Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-18 03:18:45.349097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.915 [2024-11-18 03:18:45.349184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:41.915 [2024-11-18 03:18:45.349204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:41.915 [2024-11-18 03:18:45.349215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.915 [2024-11-18 03:18:45.349241] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:41.915 [2024-11-18 03:18:45.350266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.915 [2024-11-18 03:18:45.350333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:41.915 [2024-11-18 03:18:45.350346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:18:41.915 [2024-11-18 03:18:45.350358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.915 [2024-11-18 03:18:45.352564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.915 [2024-11-18 03:18:45.352619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:41.915 [2024-11-18 03:18:45.352631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms 00:18:41.915 [2024-11-18 03:18:45.352640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.915 [2024-11-18 03:18:45.371117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.915 [2024-11-18 03:18:45.371183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:41.915 [2024-11-18 03:18:45.371198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.457 ms 00:18:41.915 [2024-11-18 03:18:45.371207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.915 [2024-11-18 03:18:45.377397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.915 [2024-11-18 03:18:45.377442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:41.915 [2024-11-18 03:18:45.377470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.142 ms 00:18:41.915 [2024-11-18 03:18:45.377479] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.915 [2024-11-18 03:18:45.380377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.915 [2024-11-18 03:18:45.380429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:41.915 [2024-11-18 03:18:45.380441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:18:41.915 [2024-11-18 03:18:45.380451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.915 [2024-11-18 03:18:45.387241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.915 [2024-11-18 03:18:45.387327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:41.915 [2024-11-18 03:18:45.387341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.741 ms 00:18:41.915 [2024-11-18 03:18:45.387351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.915 [2024-11-18 03:18:45.387492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.915 [2024-11-18 03:18:45.387506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:41.915 [2024-11-18 03:18:45.387532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:41.915 [2024-11-18 03:18:45.387542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.915 [2024-11-18 03:18:45.391879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.915 [2024-11-18 03:18:45.391937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:41.915 [2024-11-18 03:18:45.391949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.314 ms 00:18:41.915 [2024-11-18 03:18:45.391960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.915 [2024-11-18 03:18:45.395250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.915 [2024-11-18 03:18:45.395335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:41.915 [2024-11-18 03:18:45.395347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.237 ms 00:18:41.915 [2024-11-18 03:18:45.395356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.915 [2024-11-18 03:18:45.397871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.915 [2024-11-18 03:18:45.397922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:41.915 [2024-11-18 03:18:45.397933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.463 ms 00:18:41.915 [2024-11-18 03:18:45.397941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.915 [2024-11-18 03:18:45.400473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.915 [2024-11-18 03:18:45.400525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:41.915 [2024-11-18 03:18:45.400535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.441 ms 00:18:41.915 [2024-11-18 03:18:45.400543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.915 [2024-11-18 03:18:45.400586] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:41.915 [2024-11-18 03:18:45.400604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:41.915 [2024-11-18 03:18:45.400623] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:41.915 [2024-11-18 03:18:45.400633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:41.915 [2024-11-18 03:18:45.400641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:41.915 [2024-11-18 03:18:45.400650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:41.915 [2024-11-18 03:18:45.400658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400834] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.400995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 
[2024-11-18 03:18:45.401040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:18:41.916 [2024-11-18 03:18:45.401249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:41.916 [2024-11-18 03:18:45.401405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:41.917 [2024-11-18 03:18:45.401419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:41.917 [2024-11-18 03:18:45.401427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:41.917 [2024-11-18 03:18:45.401437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:41.917 [2024-11-18 03:18:45.401444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:41.917 [2024-11-18 03:18:45.401453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:41.917 [2024-11-18 03:18:45.401462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:41.917 [2024-11-18 03:18:45.401480] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:18:41.917 [2024-11-18 03:18:45.401491] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71c08846-8ea5-4e44-8d58-39f764438c15 00:18:41.917 [2024-11-18 03:18:45.401501] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:41.917 [2024-11-18 03:18:45.401509] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:41.917 [2024-11-18 03:18:45.401517] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:41.917 [2024-11-18 03:18:45.401527] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:41.917 [2024-11-18 03:18:45.401536] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:41.917 [2024-11-18 03:18:45.401551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:41.917 [2024-11-18 03:18:45.401561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:41.917 [2024-11-18 03:18:45.401568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:41.917 [2024-11-18 03:18:45.401576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:41.917 [2024-11-18 03:18:45.401584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.917 [2024-11-18 03:18:45.401594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:41.917 [2024-11-18 03:18:45.401613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:18:41.917 [2024-11-18 03:18:45.401640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.404995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.917 [2024-11-18 03:18:45.405049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:41.917 [2024-11-18 03:18:45.405063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.334 ms 00:18:41.917 [2024-11-18 03:18:45.405073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.405232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.917 [2024-11-18 03:18:45.405245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:41.917 [2024-11-18 03:18:45.405265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:18:41.917 [2024-11-18 03:18:45.405275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.415039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.917 [2024-11-18 03:18:45.415095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:41.917 [2024-11-18 03:18:45.415115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.917 [2024-11-18 03:18:45.415124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.415190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.917 [2024-11-18 03:18:45.415200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:41.917 [2024-11-18 03:18:45.415212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.917 [2024-11-18 03:18:45.415221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.415292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.917 [2024-11-18 
03:18:45.415304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:41.917 [2024-11-18 03:18:45.415385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.917 [2024-11-18 03:18:45.415396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.415415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.917 [2024-11-18 03:18:45.415425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:41.917 [2024-11-18 03:18:45.415436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.917 [2024-11-18 03:18:45.415450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.434262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.917 [2024-11-18 03:18:45.434326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:41.917 [2024-11-18 03:18:45.434349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.917 [2024-11-18 03:18:45.434359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.449093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.917 [2024-11-18 03:18:45.449145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:41.917 [2024-11-18 03:18:45.449172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.917 [2024-11-18 03:18:45.449182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.449265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.917 [2024-11-18 03:18:45.449277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:41.917 [2024-11-18 03:18:45.449287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.917 [2024-11-18 03:18:45.449298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.449359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.917 [2024-11-18 03:18:45.449373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:41.917 [2024-11-18 03:18:45.449382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.917 [2024-11-18 03:18:45.449391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.449485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.917 [2024-11-18 03:18:45.449499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:41.917 [2024-11-18 03:18:45.449509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.917 [2024-11-18 03:18:45.449518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.449552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.917 [2024-11-18 03:18:45.449564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:41.917 [2024-11-18 03:18:45.449573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.917 [2024-11-18 03:18:45.449583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.449636] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.917 [2024-11-18 03:18:45.449652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:41.917 [2024-11-18 03:18:45.449663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.917 [2024-11-18 03:18:45.449672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.449727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:41.917 [2024-11-18 03:18:45.449741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:41.917 [2024-11-18 03:18:45.449753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:41.917 [2024-11-18 03:18:45.449764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.917 [2024-11-18 03:18:45.449932] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 100.795 ms, result 0 00:18:42.490 00:18:42.490 00:18:42.490 03:18:45 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:42.490 [2024-11-18 03:18:46.031218] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:42.490 [2024-11-18 03:18:46.031393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87003 ] 00:18:42.751 [2024-11-18 03:18:46.186689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.751 [2024-11-18 03:18:46.261145] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.012 [2024-11-18 03:18:46.414169] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.012 [2024-11-18 03:18:46.414268] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.012 [2024-11-18 03:18:46.578511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-18 03:18:46.578574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:43.013 [2024-11-18 03:18:46.578595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:43.013 [2024-11-18 03:18:46.578605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-18 03:18:46.578664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-18 03:18:46.578679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:43.013 [2024-11-18 03:18:46.578689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:43.013 [2024-11-18 03:18:46.578705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-18 03:18:46.578733] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:43.013 [2024-11-18 03:18:46.579085] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:43.013 [2024-11-18 03:18:46.579128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-18 03:18:46.579142] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:43.013 [2024-11-18 03:18:46.579156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:18:43.013 [2024-11-18 03:18:46.579170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-18 03:18:46.581566] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:43.013 [2024-11-18 03:18:46.586499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-18 03:18:46.586562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:43.013 [2024-11-18 03:18:46.586581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.934 ms 00:18:43.013 [2024-11-18 03:18:46.586595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.013 [2024-11-18 03:18:46.586701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.013 [2024-11-18 03:18:46.586714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:43.276 [2024-11-18 03:18:46.586727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:43.276 [2024-11-18 03:18:46.586744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.276 [2024-11-18 03:18:46.598837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.276 [2024-11-18 03:18:46.598888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:43.276 [2024-11-18 03:18:46.598902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.041 ms 00:18:43.276 [2024-11-18 03:18:46.598916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.276 [2024-11-18 03:18:46.599024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.276 [2024-11-18 03:18:46.599035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:43.276 [2024-11-18 03:18:46.599049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:43.276 [2024-11-18 03:18:46.599058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.276 [2024-11-18 03:18:46.599129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.276 [2024-11-18 03:18:46.599142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:43.276 [2024-11-18 03:18:46.599152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:43.276 [2024-11-18 03:18:46.599169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.276 [2024-11-18 03:18:46.599198] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:43.276 [2024-11-18 03:18:46.601963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.276 [2024-11-18 03:18:46.602005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:43.276 [2024-11-18 03:18:46.602016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:18:43.276 [2024-11-18 03:18:46.602025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.276 [2024-11-18 03:18:46.602064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.276 [2024-11-18 03:18:46.602073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:43.276 [2024-11-18 03:18:46.602083] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:43.276 [2024-11-18 03:18:46.602092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.276 [2024-11-18 03:18:46.602117] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:43.276 [2024-11-18 03:18:46.602154] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:43.276 [2024-11-18 03:18:46.602204] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:43.276 [2024-11-18 03:18:46.602228] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:43.276 [2024-11-18 03:18:46.602360] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:43.276 [2024-11-18 03:18:46.602377] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:43.276 [2024-11-18 03:18:46.602389] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:43.276 [2024-11-18 03:18:46.602406] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:43.276 [2024-11-18 03:18:46.602420] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:43.276 [2024-11-18 03:18:46.602428] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:43.276 [2024-11-18 03:18:46.602437] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:43.276 [2024-11-18 03:18:46.602447] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:43.276 [2024-11-18 03:18:46.602482] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:43.276 [2024-11-18 03:18:46.602496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.276 [2024-11-18 03:18:46.602508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:43.276 [2024-11-18 03:18:46.602524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:18:43.276 [2024-11-18 03:18:46.602538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.276 [2024-11-18 03:18:46.602665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.276 [2024-11-18 03:18:46.602687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:43.276 [2024-11-18 03:18:46.602703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:43.276 [2024-11-18 03:18:46.602719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.276 [2024-11-18 03:18:46.602846] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:43.276 [2024-11-18 03:18:46.602870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:43.276 [2024-11-18 03:18:46.602881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.276 [2024-11-18 03:18:46.602899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.276 [2024-11-18 03:18:46.602909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:43.276 [2024-11-18 03:18:46.602919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:43.276 
[2024-11-18 03:18:46.602928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:43.276 [2024-11-18 03:18:46.602937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:43.276 [2024-11-18 03:18:46.602947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:43.276 [2024-11-18 03:18:46.602955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.276 [2024-11-18 03:18:46.602963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:43.276 [2024-11-18 03:18:46.602971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:43.276 [2024-11-18 03:18:46.602985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.276 [2024-11-18 03:18:46.602993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:43.276 [2024-11-18 03:18:46.603001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:43.276 [2024-11-18 03:18:46.603010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.276 [2024-11-18 03:18:46.603020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:43.276 [2024-11-18 03:18:46.603030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:43.276 [2024-11-18 03:18:46.603038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.276 [2024-11-18 03:18:46.603047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:43.276 [2024-11-18 03:18:46.603055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:43.276 [2024-11-18 03:18:46.603063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.276 [2024-11-18 03:18:46.603070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:43.276 [2024-11-18 03:18:46.603078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:43.276 [2024-11-18 03:18:46.603084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.277 [2024-11-18 03:18:46.603092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:43.277 [2024-11-18 03:18:46.603100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:43.277 [2024-11-18 03:18:46.603106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.277 [2024-11-18 03:18:46.603119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:43.277 [2024-11-18 03:18:46.603126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:43.277 [2024-11-18 03:18:46.603134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.277 [2024-11-18 03:18:46.603141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:43.277 [2024-11-18 03:18:46.603151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:43.277 [2024-11-18 03:18:46.603158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.277 [2024-11-18 03:18:46.603165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:43.277 [2024-11-18 03:18:46.603172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:43.277 [2024-11-18 03:18:46.603182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.277 [2024-11-18 03:18:46.603190] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_log 00:18:43.277 [2024-11-18 03:18:46.603198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:43.277 [2024-11-18 03:18:46.603205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.277 [2024-11-18 03:18:46.603213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:43.277 [2024-11-18 03:18:46.603221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:43.277 [2024-11-18 03:18:46.603228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.277 [2024-11-18 03:18:46.603235] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:43.277 [2024-11-18 03:18:46.603247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:43.277 [2024-11-18 03:18:46.603255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.277 [2024-11-18 03:18:46.603267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.277 [2024-11-18 03:18:46.603276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:43.277 [2024-11-18 03:18:46.603284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:43.277 [2024-11-18 03:18:46.603291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:43.277 [2024-11-18 03:18:46.603298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:43.277 [2024-11-18 03:18:46.603305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:43.277 [2024-11-18 03:18:46.603351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:43.277 [2024-11-18 03:18:46.603362] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:43.277 [2024-11-18 03:18:46.603375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.277 [2024-11-18 03:18:46.603389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:43.277 [2024-11-18 03:18:46.603397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:43.277 [2024-11-18 03:18:46.603406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:43.277 [2024-11-18 03:18:46.603415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:43.277 [2024-11-18 03:18:46.603423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:43.277 [2024-11-18 03:18:46.603433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:43.277 [2024-11-18 03:18:46.603442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:43.277 [2024-11-18 03:18:46.603451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:43.277 [2024-11-18 03:18:46.603462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:43.277 [2024-11-18 03:18:46.603471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:43.277 [2024-11-18 03:18:46.603478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:43.277 [2024-11-18 03:18:46.603486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:43.277 [2024-11-18 03:18:46.603494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:43.277 [2024-11-18 03:18:46.603504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:43.277 [2024-11-18 03:18:46.603512] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:43.277 [2024-11-18 03:18:46.603522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.277 [2024-11-18 03:18:46.603530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:43.277 [2024-11-18 03:18:46.603538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:43.277 [2024-11-18 03:18:46.603545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:43.277 [2024-11-18 03:18:46.603553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:43.277 [2024-11-18 03:18:46.603562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.277 [2024-11-18 03:18:46.603573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:43.277 [2024-11-18 03:18:46.603581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:18:43.277 [2024-11-18 03:18:46.603589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.277 [2024-11-18 03:18:46.635599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.277 [2024-11-18 03:18:46.635679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.277 [2024-11-18 03:18:46.635701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.939 ms 00:18:43.277 [2024-11-18 03:18:46.635712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.277 [2024-11-18 03:18:46.635832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.277 [2024-11-18 03:18:46.635845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:43.277 [2024-11-18 03:18:46.635865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:43.277 [2024-11-18 03:18:46.635875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.277 [2024-11-18 03:18:46.652478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.277 [2024-11-18 03:18:46.652530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:18:43.277 [2024-11-18 03:18:46.652544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.519 ms 00:18:43.277 [2024-11-18 03:18:46.652553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.277 [2024-11-18 03:18:46.652596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.277 [2024-11-18 03:18:46.652607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.277 [2024-11-18 03:18:46.652617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:43.277 [2024-11-18 03:18:46.652627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.277 [2024-11-18 03:18:46.653422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.277 [2024-11-18 03:18:46.653472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.277 [2024-11-18 03:18:46.653484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:18:43.277 [2024-11-18 03:18:46.653493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.277 [2024-11-18 03:18:46.653665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.277 [2024-11-18 03:18:46.653678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.277 [2024-11-18 03:18:46.653688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:18:43.277 [2024-11-18 03:18:46.653697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.277 [2024-11-18 03:18:46.663804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.277 [2024-11-18 03:18:46.663853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.277 [2024-11-18 03:18:46.663874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.079 ms 00:18:43.277 [2024-11-18 03:18:46.663883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.277 [2024-11-18 03:18:46.669066] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:43.277 [2024-11-18 03:18:46.669126] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:43.277 [2024-11-18 03:18:46.669142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.277 [2024-11-18 03:18:46.669151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:43.277 [2024-11-18 03:18:46.669161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.128 ms 00:18:43.277 [2024-11-18 03:18:46.669169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.277 [2024-11-18 03:18:46.686797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.277 [2024-11-18 03:18:46.686866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:43.277 [2024-11-18 03:18:46.686878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.566 ms 00:18:43.277 [2024-11-18 03:18:46.686887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.277 [2024-11-18 03:18:46.690697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.277 [2024-11-18 03:18:46.690758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:43.277 [2024-11-18 03:18:46.690774] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.744 ms 00:18:43.277 [2024-11-18 03:18:46.690787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.277 [2024-11-18 03:18:46.694013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.277 [2024-11-18 03:18:46.694061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:43.277 [2024-11-18 03:18:46.694072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.165 ms 00:18:43.277 [2024-11-18 03:18:46.694081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.278 [2024-11-18 03:18:46.694545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.278 [2024-11-18 03:18:46.694584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:43.278 [2024-11-18 03:18:46.694603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:18:43.278 [2024-11-18 03:18:46.694617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.278 [2024-11-18 03:18:46.728557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.278 [2024-11-18 03:18:46.728623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:43.278 [2024-11-18 03:18:46.728637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.900 ms 00:18:43.278 [2024-11-18 03:18:46.728646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.278 [2024-11-18 03:18:46.736884] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:43.278 [2024-11-18 03:18:46.740493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.278 [2024-11-18 03:18:46.740545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:43.278 [2024-11-18 03:18:46.740561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.783 ms 00:18:43.278 [2024-11-18 03:18:46.740573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.278 [2024-11-18 03:18:46.740655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.278 [2024-11-18 03:18:46.740667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:43.278 [2024-11-18 03:18:46.740677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:43.278 [2024-11-18 03:18:46.740686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.278 [2024-11-18 03:18:46.740760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.278 [2024-11-18 03:18:46.740773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:43.278 [2024-11-18 03:18:46.740783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:43.278 [2024-11-18 03:18:46.740795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.278 [2024-11-18 03:18:46.740818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.278 [2024-11-18 03:18:46.740827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:43.278 [2024-11-18 03:18:46.740836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:43.278 [2024-11-18 03:18:46.740844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.278 [2024-11-18 03:18:46.740886] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: 
*NOTICE*: [FTL][ftl0] Self test skipped 00:18:43.278 [2024-11-18 03:18:46.740897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.278 [2024-11-18 03:18:46.740908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:43.278 [2024-11-18 03:18:46.740922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:43.278 [2024-11-18 03:18:46.740931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.278 [2024-11-18 03:18:46.747452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.278 [2024-11-18 03:18:46.747501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:43.278 [2024-11-18 03:18:46.747515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.499 ms 00:18:43.278 [2024-11-18 03:18:46.747524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.278 [2024-11-18 03:18:46.747622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.278 [2024-11-18 03:18:46.747634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:43.278 [2024-11-18 03:18:46.747649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:43.278 [2024-11-18 03:18:46.747665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.278 [2024-11-18 03:18:46.749601] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 170.544 ms, result 0 00:18:44.662  [2024-11-18T03:20:15.893Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-18 03:20:15.757238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.316 [2024-11-18 03:20:15.757303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*:
[FTL][ftl0] name: Deinit core IO channel 00:20:12.316 [2024-11-18 03:20:15.757326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:12.316 [2024-11-18 03:20:15.757334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.316 [2024-11-18 03:20:15.757357] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:12.316 [2024-11-18 03:20:15.758015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.316 [2024-11-18 03:20:15.758041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:12.316 [2024-11-18 03:20:15.758050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:20:12.316 [2024-11-18 03:20:15.758057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.316 [2024-11-18 03:20:15.758237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.316 [2024-11-18 03:20:15.758246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:12.316 [2024-11-18 03:20:15.758253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:20:12.316 [2024-11-18 03:20:15.758260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.316 [2024-11-18 03:20:15.760836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.316 [2024-11-18 03:20:15.760859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:12.316 [2024-11-18 03:20:15.760867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.564 ms 00:20:12.316 [2024-11-18 03:20:15.760874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.316 [2024-11-18 03:20:15.765482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.316 [2024-11-18 03:20:15.765508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:12.317 [2024-11-18 03:20:15.765517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.590 ms 00:20:12.317 [2024-11-18 03:20:15.765524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.317 [2024-11-18 03:20:15.768336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.317 [2024-11-18 03:20:15.768373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:12.317 [2024-11-18 03:20:15.768382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.774 ms 00:20:12.317 [2024-11-18 03:20:15.768388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.317 [2024-11-18 03:20:15.772695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.317 [2024-11-18 03:20:15.772725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:12.317 [2024-11-18 03:20:15.772734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.275 ms 00:20:12.317 [2024-11-18 03:20:15.772740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.317 [2024-11-18 03:20:15.772827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.317 [2024-11-18 03:20:15.772843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:12.317 [2024-11-18 03:20:15.772850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:12.317 [2024-11-18 03:20:15.772856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:12.317 [2024-11-18 03:20:15.775508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.317 [2024-11-18 03:20:15.775535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:12.317 [2024-11-18 03:20:15.775542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.639 ms 00:20:12.317 [2024-11-18 03:20:15.775548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.317 [2024-11-18 03:20:15.777892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.317 [2024-11-18 03:20:15.777917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:12.317 [2024-11-18 03:20:15.777924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.309 ms 00:20:12.317 [2024-11-18 03:20:15.777929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.317 [2024-11-18 03:20:15.779503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.317 [2024-11-18 03:20:15.779528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:12.317 [2024-11-18 03:20:15.779534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:20:12.317 [2024-11-18 03:20:15.779540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.317 [2024-11-18 03:20:15.781087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.317 [2024-11-18 03:20:15.781113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:12.317 [2024-11-18 03:20:15.781120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms 00:20:12.317 [2024-11-18 03:20:15.781125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.317 [2024-11-18 03:20:15.781148] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:12.317 [2024-11-18 03:20:15.781164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781232] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 
03:20:15.781395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:12.317 [2024-11-18 03:20:15.781442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:20:12.318 [2024-11-18 03:20:15.781549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:12.318 [2024-11-18 03:20:15.781798] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:12.318 [2024-11-18 03:20:15.781805] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71c08846-8ea5-4e44-8d58-39f764438c15 00:20:12.318 [2024-11-18 03:20:15.781813] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:12.318 [2024-11-18 03:20:15.781820] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:12.318 [2024-11-18 03:20:15.781825] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:12.318 [2024-11-18 03:20:15.781831] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:12.318 [2024-11-18 03:20:15.781837] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:12.318 [2024-11-18 03:20:15.781843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:12.318 [2024-11-18 03:20:15.781850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:12.318 [2024-11-18 03:20:15.781855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:12.318 [2024-11-18 03:20:15.781861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:12.318 [2024-11-18 03:20:15.781867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.318 [2024-11-18 03:20:15.781876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:12.318 [2024-11-18 03:20:15.781889] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:20:12.318 [2024-11-18 03:20:15.781895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.318 [2024-11-18 03:20:15.784462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.318 [2024-11-18 03:20:15.784484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:12.318 [2024-11-18 03:20:15.784492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.550 ms 00:20:12.318 [2024-11-18 03:20:15.784498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.318 [2024-11-18 03:20:15.784586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.318 [2024-11-18 03:20:15.784600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:12.318 [2024-11-18 03:20:15.784606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:12.318 [2024-11-18 03:20:15.784612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.318 [2024-11-18 03:20:15.789845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.318 [2024-11-18 03:20:15.789873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:12.318 [2024-11-18 03:20:15.789881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.318 [2024-11-18 03:20:15.789887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.318 [2024-11-18 03:20:15.789932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.318 [2024-11-18 03:20:15.789943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:12.318 [2024-11-18 03:20:15.789950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.318 [2024-11-18 03:20:15.789956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.318 [2024-11-18 03:20:15.789997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.318 [2024-11-18 03:20:15.790005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:12.318 [2024-11-18 03:20:15.790012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.319 [2024-11-18 03:20:15.790017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.319 [2024-11-18 03:20:15.790030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.319 [2024-11-18 03:20:15.790036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:12.319 [2024-11-18 03:20:15.790046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.319 [2024-11-18 03:20:15.790058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.319 [2024-11-18 03:20:15.800590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.319 [2024-11-18 03:20:15.800623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:12.319 [2024-11-18 03:20:15.800631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.319 [2024-11-18 03:20:15.800638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.319 [2024-11-18 03:20:15.809201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.319 [2024-11-18 03:20:15.809233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:20:12.319 [2024-11-18 03:20:15.809250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.319 [2024-11-18 03:20:15.809256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.319 [2024-11-18 03:20:15.809294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.319 [2024-11-18 03:20:15.809302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:12.319 [2024-11-18 03:20:15.809309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.319 [2024-11-18 03:20:15.809326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.319 [2024-11-18 03:20:15.809348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.319 [2024-11-18 03:20:15.809354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:12.319 [2024-11-18 03:20:15.809361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.319 [2024-11-18 03:20:15.809369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.319 [2024-11-18 03:20:15.809429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.319 [2024-11-18 03:20:15.809438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:12.319 [2024-11-18 03:20:15.809446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.319 [2024-11-18 03:20:15.809452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.319 [2024-11-18 03:20:15.809476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.319 [2024-11-18 03:20:15.809484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:12.319 [2024-11-18 03:20:15.809490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.319 [2024-11-18 03:20:15.809496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.319 [2024-11-18 03:20:15.809537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.319 [2024-11-18 03:20:15.809547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:12.319 [2024-11-18 03:20:15.809554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.319 [2024-11-18 03:20:15.809560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.319 [2024-11-18 03:20:15.809602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.319 [2024-11-18 03:20:15.809610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:12.319 [2024-11-18 03:20:15.809617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.319 [2024-11-18 03:20:15.809625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.319 [2024-11-18 03:20:15.809733] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.469 ms, result 0 00:20:12.616 00:20:12.617 00:20:12.617 03:20:16 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:15.183 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:15.183 03:20:18 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:15.183 [2024-11-18 03:20:18.292133] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:20:15.183 [2024-11-18 03:20:18.292284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87958 ] 00:20:15.183 [2024-11-18 03:20:18.442748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.183 [2024-11-18 03:20:18.496568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.183 [2024-11-18 03:20:18.597350] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:15.183 [2024-11-18 03:20:18.597404] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:15.183 [2024-11-18 03:20:18.752009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.183 [2024-11-18 03:20:18.752043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:15.183 [2024-11-18 03:20:18.752058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:15.183 [2024-11-18 03:20:18.752064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.183 [2024-11-18 03:20:18.752104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.183 [2024-11-18 03:20:18.752113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.183 [2024-11-18 03:20:18.752119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:15.183 [2024-11-18 03:20:18.752126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.183 [2024-11-18 03:20:18.752142] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:15.183 [2024-11-18 03:20:18.752335] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:15.183 [2024-11-18 03:20:18.752348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.183 [2024-11-18 03:20:18.752355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.183 [2024-11-18 03:20:18.752363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:20:15.183 [2024-11-18 03:20:18.752371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.183 [2024-11-18 03:20:18.753629] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:15.183 [2024-11-18 03:20:18.756490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.183 [2024-11-18 03:20:18.756519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:15.183 [2024-11-18 03:20:18.756528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.862 ms 00:20:15.183 [2024-11-18 03:20:18.756534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.183 [2024-11-18 03:20:18.756584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.183 [2024-11-18 03:20:18.756592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:15.183 [2024-11-18 03:20:18.756599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:15.183 
[2024-11-18 03:20:18.756610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.442 [2024-11-18 03:20:18.762864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.442 [2024-11-18 03:20:18.762889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.442 [2024-11-18 03:20:18.762896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.213 ms 00:20:15.442 [2024-11-18 03:20:18.762907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.442 [2024-11-18 03:20:18.762978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.442 [2024-11-18 03:20:18.762985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.442 [2024-11-18 03:20:18.762992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:15.442 [2024-11-18 03:20:18.763002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.442 [2024-11-18 03:20:18.763037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.442 [2024-11-18 03:20:18.763048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:15.442 [2024-11-18 03:20:18.763059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:15.442 [2024-11-18 03:20:18.763065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.442 [2024-11-18 03:20:18.763085] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:15.442 [2024-11-18 03:20:18.764636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.442 [2024-11-18 03:20:18.764658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.442 [2024-11-18 03:20:18.764665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:20:15.442 [2024-11-18 03:20:18.764671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.442 [2024-11-18 03:20:18.764697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.442 [2024-11-18 03:20:18.764703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:15.442 [2024-11-18 03:20:18.764710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:15.442 [2024-11-18 03:20:18.764716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.442 [2024-11-18 03:20:18.764736] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:15.442 [2024-11-18 03:20:18.764756] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:15.442 [2024-11-18 03:20:18.764787] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:15.442 [2024-11-18 03:20:18.764800] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:15.443 [2024-11-18 03:20:18.764884] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:15.443 [2024-11-18 03:20:18.764893] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:15.443 [2024-11-18 03:20:18.764901] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:20:15.443 [2024-11-18 03:20:18.764910] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:15.443 [2024-11-18 03:20:18.764920] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:15.443 [2024-11-18 03:20:18.764926] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:15.443 [2024-11-18 03:20:18.764932] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:15.443 [2024-11-18 03:20:18.764941] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:15.443 [2024-11-18 03:20:18.764947] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:15.443 [2024-11-18 03:20:18.764953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.443 [2024-11-18 03:20:18.764959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:15.443 [2024-11-18 03:20:18.764968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:20:15.443 [2024-11-18 03:20:18.764974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.443 [2024-11-18 03:20:18.765041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.443 [2024-11-18 03:20:18.765052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:15.443 [2024-11-18 03:20:18.765058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:15.443 [2024-11-18 03:20:18.765064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.443 [2024-11-18 03:20:18.765144] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:15.443 [2024-11-18 03:20:18.765158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:15.443 [2024-11-18 03:20:18.765168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.443 [2024-11-18 03:20:18.765181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.443 [2024-11-18 03:20:18.765189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:15.443 [2024-11-18 03:20:18.765194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:15.443 [2024-11-18 03:20:18.765200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:15.443 [2024-11-18 03:20:18.765206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:15.443 [2024-11-18 03:20:18.765211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:15.443 [2024-11-18 03:20:18.765217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.443 [2024-11-18 03:20:18.765222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:15.443 [2024-11-18 03:20:18.765228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:15.443 [2024-11-18 03:20:18.765236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.443 [2024-11-18 03:20:18.765242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:15.443 [2024-11-18 03:20:18.765248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:15.443 [2024-11-18 03:20:18.765253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.443 [2024-11-18 03:20:18.765259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:15.443 [2024-11-18 03:20:18.765264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:15.443 [2024-11-18 03:20:18.765270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.443 [2024-11-18 03:20:18.765275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:15.443 [2024-11-18 03:20:18.765281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:15.443 [2024-11-18 03:20:18.765287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.443 [2024-11-18 03:20:18.765294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:15.443 [2024-11-18 03:20:18.765300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:15.443 [2024-11-18 03:20:18.765306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.443 [2024-11-18 03:20:18.765323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:15.443 [2024-11-18 03:20:18.765330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:15.443 [2024-11-18 03:20:18.765337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.443 [2024-11-18 03:20:18.765349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:15.443 [2024-11-18 03:20:18.765355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:15.443 [2024-11-18 03:20:18.765361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:15.443 [2024-11-18 03:20:18.765367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:15.443 [2024-11-18 03:20:18.765373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:15.443 [2024-11-18 03:20:18.765380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.443 [2024-11-18 03:20:18.765387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:15.443 [2024-11-18 03:20:18.765393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:15.443 [2024-11-18 03:20:18.765399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.443 [2024-11-18 03:20:18.765405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:15.443 [2024-11-18 03:20:18.765412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:15.443 [2024-11-18 03:20:18.765418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.443 [2024-11-18 03:20:18.765424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:15.443 [2024-11-18 03:20:18.765430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:15.443 [2024-11-18 03:20:18.765436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.443 [2024-11-18 03:20:18.765444] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:15.443 [2024-11-18 03:20:18.765454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:15.443 [2024-11-18 03:20:18.765461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.443 [2024-11-18 03:20:18.765470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.443 [2024-11-18 03:20:18.765477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:15.443 [2024-11-18 03:20:18.765484] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:15.443 [2024-11-18 03:20:18.765490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:15.443 [2024-11-18 03:20:18.765496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:15.443 [2024-11-18 03:20:18.765502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:15.443 [2024-11-18 03:20:18.765508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:15.443 [2024-11-18 03:20:18.765515] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:15.443 [2024-11-18 03:20:18.765524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.443 [2024-11-18 03:20:18.765533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:15.443 [2024-11-18 03:20:18.765540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:15.443 [2024-11-18 03:20:18.765546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:15.443 [2024-11-18 03:20:18.765552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:15.443 [2024-11-18 03:20:18.765558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:15.443 [2024-11-18 03:20:18.765566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:15.444 [2024-11-18 03:20:18.765573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:15.444 [2024-11-18 03:20:18.765579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:15.444 [2024-11-18 03:20:18.765586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:15.444 [2024-11-18 03:20:18.765592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:15.444 [2024-11-18 03:20:18.765598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:15.444 [2024-11-18 03:20:18.765604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:15.444 [2024-11-18 03:20:18.765610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:15.444 [2024-11-18 03:20:18.765617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:15.444 [2024-11-18 03:20:18.765624] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:15.444 [2024-11-18 03:20:18.765631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.444 [2024-11-18 03:20:18.765640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:15.444 [2024-11-18 03:20:18.765647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:15.444 [2024-11-18 03:20:18.765654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:15.444 [2024-11-18 03:20:18.765660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:15.444 [2024-11-18 03:20:18.765668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.765676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:15.444 [2024-11-18 03:20:18.765682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:20:15.444 [2024-11-18 03:20:18.765688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.786154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.786188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.444 [2024-11-18 03:20:18.786203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.431 ms 00:20:15.444 [2024-11-18 03:20:18.786213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.786288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.786295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:15.444 [2024-11-18 03:20:18.786301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:15.444 [2024-11-18 03:20:18.786307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.797995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.798036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.444 [2024-11-18 03:20:18.798050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.629 ms 00:20:15.444 [2024-11-18 03:20:18.798061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.798100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.798112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.444 [2024-11-18 03:20:18.798124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:15.444 [2024-11-18 03:20:18.798135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.798691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.798729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.444 [2024-11-18 03:20:18.798744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:20:15.444 [2024-11-18 03:20:18.798756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.798935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.798943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.444 [2024-11-18 03:20:18.798951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:20:15.444 [2024-11-18 03:20:18.798958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.804404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.804428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.444 [2024-11-18 03:20:18.804440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.428 ms 00:20:15.444 [2024-11-18 03:20:18.804446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.807402] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:15.444 [2024-11-18 03:20:18.807431] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:15.444 [2024-11-18 03:20:18.807443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.807450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:15.444 [2024-11-18 03:20:18.807456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.930 ms 00:20:15.444 [2024-11-18 03:20:18.807462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.819021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.819048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:15.444 [2024-11-18 03:20:18.819063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.527 ms 00:20:15.444 [2024-11-18 03:20:18.819069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.820797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.820822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:15.444 [2024-11-18 03:20:18.820829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.697 ms 00:20:15.444 [2024-11-18 03:20:18.820834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.822781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.822808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:15.444 [2024-11-18 03:20:18.822816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.920 ms 00:20:15.444 [2024-11-18 03:20:18.822821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.823078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.823090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:15.444 [2024-11-18 03:20:18.823100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:20:15.444 [2024-11-18 03:20:18.823106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.841725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.841768] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:15.444 [2024-11-18 03:20:18.841777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.607 ms 00:20:15.444 [2024-11-18 03:20:18.841787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.847632] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:15.444 [2024-11-18 03:20:18.849752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.849776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:15.444 [2024-11-18 03:20:18.849790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.932 ms 00:20:15.444 [2024-11-18 03:20:18.849797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.849837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.849846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:15.444 [2024-11-18 03:20:18.849856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:15.444 [2024-11-18 03:20:18.849863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.849937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.849946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:15.444 [2024-11-18 03:20:18.849952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:15.444 [2024-11-18 03:20:18.849961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.444 [2024-11-18 03:20:18.849977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.444 [2024-11-18 03:20:18.849986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:15.444 [2024-11-18 03:20:18.849993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:15.445 [2024-11-18 03:20:18.849999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.445 [2024-11-18 03:20:18.850029] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:15.445 [2024-11-18 03:20:18.850039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.445 [2024-11-18 03:20:18.850045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:15.445 [2024-11-18 03:20:18.850054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:15.445 [2024-11-18 03:20:18.850060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.445 [2024-11-18 03:20:18.853803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.445 [2024-11-18 03:20:18.853830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:15.445 [2024-11-18 03:20:18.853844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:20:15.445 [2024-11-18 03:20:18.853850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.445 [2024-11-18 03:20:18.853905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.445 [2024-11-18 03:20:18.853913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:15.445 [2024-11-18 03:20:18.853920] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:15.445 [2024-11-18 03:20:18.853926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.445 [2024-11-18 03:20:18.854793] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.418 ms, result 0 00:20:16.381  [2024-11-18T03:20:20.897Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-18T03:20:22.270Z] Copying: 22/1024 [MB] (10 MBps) [2024-11-18T03:20:23.205Z] Copying: 33/1024 [MB] (11 MBps) [2024-11-18T03:20:24.139Z] Copying: 44/1024 [MB] (11 MBps) [2024-11-18T03:20:25.073Z] Copying: 56/1024 [MB] (11 MBps) [2024-11-18T03:20:26.005Z] Copying: 67/1024 [MB] (11 MBps) [2024-11-18T03:20:26.936Z] Copying: 78/1024 [MB] (11 MBps) [2024-11-18T03:20:27.876Z] Copying: 90/1024 [MB] (11 MBps) [2024-11-18T03:20:29.261Z] Copying: 101/1024 [MB] (11 MBps) [2024-11-18T03:20:30.201Z] Copying: 111/1024 [MB] (10 MBps) [2024-11-18T03:20:31.143Z] Copying: 121/1024 [MB] (10 MBps) [2024-11-18T03:20:32.087Z] Copying: 134/1024 [MB] (12 MBps) [2024-11-18T03:20:33.030Z] Copying: 158/1024 [MB] (23 MBps) [2024-11-18T03:20:33.974Z] Copying: 174/1024 [MB] (16 MBps) [2024-11-18T03:20:34.919Z] Copying: 190/1024 [MB] (15 MBps) [2024-11-18T03:20:36.308Z] Copying: 212/1024 [MB] (22 MBps) [2024-11-18T03:20:36.878Z] Copying: 227/1024 [MB] (15 MBps) [2024-11-18T03:20:38.267Z] Copying: 245/1024 [MB] (18 MBps) [2024-11-18T03:20:39.221Z] Copying: 267/1024 [MB] (21 MBps) [2024-11-18T03:20:40.162Z] Copying: 293/1024 [MB] (25 MBps) [2024-11-18T03:20:41.106Z] Copying: 310/1024 [MB] (17 MBps) [2024-11-18T03:20:42.052Z] Copying: 325/1024 [MB] (14 MBps) [2024-11-18T03:20:42.998Z] Copying: 340/1024 [MB] (14 MBps) [2024-11-18T03:20:43.941Z] Copying: 359/1024 [MB] (19 MBps) [2024-11-18T03:20:44.889Z] Copying: 387/1024 [MB] (27 MBps) [2024-11-18T03:20:45.875Z] Copying: 402/1024 [MB] (15 MBps) [2024-11-18T03:20:47.261Z] Copying: 425/1024 [MB] (22 MBps) [2024-11-18T03:20:48.206Z] Copying: 444/1024 [MB] (19 MBps) [2024-11-18T03:20:49.150Z] Copying: 460/1024 [MB] (15 MBps) [2024-11-18T03:20:50.093Z] Copying: 490/1024 [MB] (29 MBps) [2024-11-18T03:20:51.036Z] Copying: 511/1024 [MB] (21 MBps) [2024-11-18T03:20:51.979Z] Copying: 524/1024 [MB] (13 MBps) [2024-11-18T03:20:52.917Z] Copying: 547/1024 [MB] (22 MBps) [2024-11-18T03:20:54.303Z] Copying: 569/1024 [MB] (22 MBps) [2024-11-18T03:20:54.876Z] Copying: 588/1024 [MB] (19 MBps) [2024-11-18T03:20:56.264Z] Copying: 602/1024 [MB] (13 MBps) [2024-11-18T03:20:57.209Z] Copying: 619/1024 [MB] (17 MBps) [2024-11-18T03:20:58.153Z] Copying: 630/1024 [MB] (11 MBps) [2024-11-18T03:20:59.096Z] Copying: 653/1024 [MB] (23 MBps) [2024-11-18T03:21:00.040Z] Copying: 682/1024 [MB] (28 MBps) [2024-11-18T03:21:00.984Z] Copying: 697/1024 [MB] (14 MBps) [2024-11-18T03:21:01.930Z] Copying: 743/1024 [MB] (46 MBps) [2024-11-18T03:21:02.873Z] Copying: 764/1024 [MB] (21 MBps) [2024-11-18T03:21:04.259Z] Copying: 780/1024 [MB] (15 MBps) [2024-11-18T03:21:05.203Z] Copying: 799/1024 [MB] (19 MBps) [2024-11-18T03:21:06.147Z] Copying: 820/1024 [MB] (21 MBps) [2024-11-18T03:21:07.090Z] Copying: 840/1024 [MB] (19 MBps) [2024-11-18T03:21:08.034Z] Copying: 859/1024 [MB] (19 MBps) [2024-11-18T03:21:08.979Z] Copying: 880/1024 [MB] (20 MBps) [2024-11-18T03:21:09.921Z] Copying: 897/1024 [MB] (16 MBps) [2024-11-18T03:21:11.308Z] Copying: 913/1024 [MB] (16 MBps) [2024-11-18T03:21:11.881Z] Copying: 929/1024 [MB] (16 MBps) [2024-11-18T03:21:13.270Z] Copying: 941/1024 [MB] (11 MBps) 
[2024-11-18T03:21:14.215Z] Copying: 956/1024 [MB] (14 MBps) [2024-11-18T03:21:15.191Z] Copying: 977/1024 [MB] (21 MBps) [2024-11-18T03:21:16.134Z] Copying: 988/1024 [MB] (11 MBps) [2024-11-18T03:21:17.075Z] Copying: 1000/1024 [MB] (12 MBps) [2024-11-18T03:21:18.021Z] Copying: 1015/1024 [MB] (15 MBps) [2024-11-18T03:21:18.283Z] Copying: 1048160/1048576 [kB] (7788 kBps) [2024-11-18T03:21:18.283Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-18 03:21:18.257540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.706 [2024-11-18 03:21:18.257934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:14.706 [2024-11-18 03:21:18.257961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:14.706 [2024-11-18 03:21:18.257971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.706 [2024-11-18 03:21:18.262072] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:14.706 [2024-11-18 03:21:18.263361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.706 [2024-11-18 03:21:18.263412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:14.706 [2024-11-18 03:21:18.263423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:21:14.706 [2024-11-18 03:21:18.263431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.706 [2024-11-18 03:21:18.275464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.706 [2024-11-18 03:21:18.275501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:14.706 [2024-11-18 03:21:18.275513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.827 ms 00:21:14.706 [2024-11-18 03:21:18.275521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.967 [2024-11-18 03:21:18.297000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.967 [2024-11-18 03:21:18.297042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:14.967 [2024-11-18 03:21:18.297053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.462 ms 00:21:14.967 [2024-11-18 03:21:18.297061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.967 [2024-11-18 03:21:18.303164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.967 [2024-11-18 03:21:18.303194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:14.967 [2024-11-18 03:21:18.303203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.076 ms 00:21:14.967 [2024-11-18 03:21:18.303215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.967 [2024-11-18 03:21:18.305138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.967 [2024-11-18 03:21:18.305165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:14.968 [2024-11-18 03:21:18.305175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.887 ms 00:21:14.968 [2024-11-18 03:21:18.305182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.968 [2024-11-18 03:21:18.309222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.968 [2024-11-18 03:21:18.309250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:14.968 [2024-11-18 03:21:18.309259] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.013 ms 00:21:14.968 [2024-11-18 03:21:18.309273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.230 [2024-11-18 03:21:18.563694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.230 [2024-11-18 03:21:18.563737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:15.230 [2024-11-18 03:21:18.563758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 254.381 ms 00:21:15.230 [2024-11-18 03:21:18.563766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.230 [2024-11-18 03:21:18.565717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.230 [2024-11-18 03:21:18.565746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:15.230 [2024-11-18 03:21:18.565755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.935 ms 00:21:15.230 [2024-11-18 03:21:18.565762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.230 [2024-11-18 03:21:18.567254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.230 [2024-11-18 03:21:18.567283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:15.230 [2024-11-18 03:21:18.567292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:21:15.230 [2024-11-18 03:21:18.567299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.230 [2024-11-18 03:21:18.568536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.230 [2024-11-18 03:21:18.568562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:15.230 [2024-11-18 03:21:18.568571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:21:15.231 [2024-11-18 03:21:18.568578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.231 [2024-11-18 03:21:18.569726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.231 [2024-11-18 03:21:18.569751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:15.231 [2024-11-18 03:21:18.569759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:21:15.231 [2024-11-18 03:21:18.569766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.231 [2024-11-18 03:21:18.569792] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:15.231 [2024-11-18 03:21:18.569806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107776 / 261120 wr_cnt: 1 state: open 00:21:15.231 [2024-11-18 03:21:18.569815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: 
free 00:21:15.231 [2024-11-18 03:21:18.569861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.569998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 
261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:15.231 [2024-11-18 03:21:18.570415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570437] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:15.232 [2024-11-18 03:21:18.570604] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:15.232 [2024-11-18 03:21:18.570612] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71c08846-8ea5-4e44-8d58-39f764438c15 00:21:15.232 [2024-11-18 03:21:18.570620] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107776 00:21:15.232 [2024-11-18 03:21:18.570627] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108736 00:21:15.232 [2024-11-18 03:21:18.570636] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107776 00:21:15.232 [2024-11-18 03:21:18.570649] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:21:15.232 [2024-11-18 03:21:18.570656] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:15.232 [2024-11-18 03:21:18.570664] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:15.232 [2024-11-18 03:21:18.570671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:15.232 [2024-11-18 03:21:18.570677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:15.232 [2024-11-18 03:21:18.570683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:15.232 [2024-11-18 03:21:18.570690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.232 [2024-11-18 03:21:18.570697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:15.232 [2024-11-18 03:21:18.570709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:21:15.232 [2024-11-18 03:21:18.570717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.572160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.232 [2024-11-18 03:21:18.572182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:15.232 [2024-11-18 03:21:18.572190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms 00:21:15.232 [2024-11-18 03:21:18.572197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.572276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.232 [2024-11-18 03:21:18.572284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:15.232 [2024-11-18 03:21:18.572293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:15.232 [2024-11-18 03:21:18.572300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.576839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.232 [2024-11-18 03:21:18.576859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:15.232 [2024-11-18 03:21:18.576868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.232 [2024-11-18 03:21:18.576876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.576924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.232 [2024-11-18 03:21:18.576937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:15.232 [2024-11-18 03:21:18.576945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.232 [2024-11-18 03:21:18.576952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.576998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.232 [2024-11-18 03:21:18.577010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:15.232 [2024-11-18 03:21:18.577018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.232 [2024-11-18 03:21:18.577026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.577042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.232 [2024-11-18 03:21:18.577050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:15.232 [2024-11-18 03:21:18.577057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.232 [2024-11-18 03:21:18.577065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:15.232 [2024-11-18 03:21:18.586117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.232 [2024-11-18 03:21:18.586157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:15.232 [2024-11-18 03:21:18.586167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.232 [2024-11-18 03:21:18.586175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.593611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.232 [2024-11-18 03:21:18.593646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:15.232 [2024-11-18 03:21:18.593656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.232 [2024-11-18 03:21:18.593663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.593687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.232 [2024-11-18 03:21:18.593694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:15.232 [2024-11-18 03:21:18.593707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.232 [2024-11-18 03:21:18.593720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.593759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.232 [2024-11-18 03:21:18.593771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:15.232 [2024-11-18 03:21:18.593779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.232 [2024-11-18 03:21:18.593786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.593850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.232 [2024-11-18 03:21:18.593859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:15.232 [2024-11-18 03:21:18.593869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.232 [2024-11-18 03:21:18.593877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.593903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.232 [2024-11-18 03:21:18.593912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:15.232 [2024-11-18 03:21:18.593919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.232 [2024-11-18 03:21:18.593927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.593962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.232 [2024-11-18 03:21:18.593977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:15.232 [2024-11-18 03:21:18.593984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.232 [2024-11-18 03:21:18.593994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.594039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.232 [2024-11-18 03:21:18.594048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:15.232 [2024-11-18 03:21:18.594056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.232 [2024-11-18 
03:21:18.594064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.232 [2024-11-18 03:21:18.594179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 338.077 ms, result 0 00:21:15.804 00:21:15.804 00:21:15.804 03:21:19 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:16.066 [2024-11-18 03:21:19.417788] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:16.066 [2024-11-18 03:21:19.417938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88586 ] 00:21:16.066 [2024-11-18 03:21:19.568357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.066 [2024-11-18 03:21:19.619260] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.327 [2024-11-18 03:21:19.733627] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:16.327 [2024-11-18 03:21:19.733710] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:16.327 [2024-11-18 03:21:19.894812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.327 [2024-11-18 03:21:19.895029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:16.327 [2024-11-18 03:21:19.895059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:16.327 [2024-11-18 03:21:19.895068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.327 [2024-11-18 03:21:19.895139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.327 [2024-11-18 03:21:19.895150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:16.327 [2024-11-18 03:21:19.895163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:16.327 [2024-11-18 03:21:19.895174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.327 [2024-11-18 03:21:19.895196] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:16.327 [2024-11-18 03:21:19.895483] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:16.327 [2024-11-18 03:21:19.895501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.327 [2024-11-18 03:21:19.895509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:16.327 [2024-11-18 03:21:19.895518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:21:16.327 [2024-11-18 03:21:19.895529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.327 [2024-11-18 03:21:19.897164] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:16.590 [2024-11-18 03:21:19.900870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.590 [2024-11-18 03:21:19.900919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:16.590 [2024-11-18 03:21:19.900938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.708 
ms 00:21:16.590 [2024-11-18 03:21:19.900947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.590 [2024-11-18 03:21:19.901024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.590 [2024-11-18 03:21:19.901035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:16.590 [2024-11-18 03:21:19.901044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:16.590 [2024-11-18 03:21:19.901052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.590 [2024-11-18 03:21:19.909127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.590 [2024-11-18 03:21:19.909299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:16.590 [2024-11-18 03:21:19.909713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.030 ms 00:21:16.590 [2024-11-18 03:21:19.909749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.590 [2024-11-18 03:21:19.909865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.590 [2024-11-18 03:21:19.909881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:16.590 [2024-11-18 03:21:19.909891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:21:16.590 [2024-11-18 03:21:19.909902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.590 [2024-11-18 03:21:19.909973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.590 [2024-11-18 03:21:19.909984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:16.590 [2024-11-18 03:21:19.909998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:16.590 [2024-11-18 03:21:19.910006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.590 [2024-11-18 03:21:19.910036] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:16.590 [2024-11-18 03:21:19.912216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.590 [2024-11-18 03:21:19.912381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:16.590 [2024-11-18 03:21:19.912799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.193 ms 00:21:16.590 [2024-11-18 03:21:19.912826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.590 [2024-11-18 03:21:19.912877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.590 [2024-11-18 03:21:19.912887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:16.590 [2024-11-18 03:21:19.912897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:16.590 [2024-11-18 03:21:19.912904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.590 [2024-11-18 03:21:19.912930] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:16.590 [2024-11-18 03:21:19.912961] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:16.590 [2024-11-18 03:21:19.912998] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:16.590 [2024-11-18 03:21:19.913023] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 
00:21:16.590 [2024-11-18 03:21:19.913134] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:16.590 [2024-11-18 03:21:19.913149] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:16.590 [2024-11-18 03:21:19.913160] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:16.590 [2024-11-18 03:21:19.913171] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:16.590 [2024-11-18 03:21:19.913182] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:16.590 [2024-11-18 03:21:19.913191] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:16.590 [2024-11-18 03:21:19.913199] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:16.590 [2024-11-18 03:21:19.913206] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:16.590 [2024-11-18 03:21:19.913213] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:16.590 [2024-11-18 03:21:19.913220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.590 [2024-11-18 03:21:19.913228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:16.590 [2024-11-18 03:21:19.913236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:21:16.590 [2024-11-18 03:21:19.913245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.590 [2024-11-18 03:21:19.913349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.590 [2024-11-18 03:21:19.913362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:16.590 [2024-11-18 03:21:19.913376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:21:16.590 [2024-11-18 03:21:19.913384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.590 [2024-11-18 03:21:19.913488] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:16.590 [2024-11-18 03:21:19.913499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:16.590 [2024-11-18 03:21:19.913508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:16.590 [2024-11-18 03:21:19.913526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.590 [2024-11-18 03:21:19.913534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:16.590 [2024-11-18 03:21:19.913541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:16.590 [2024-11-18 03:21:19.913548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:16.590 [2024-11-18 03:21:19.913555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:16.590 [2024-11-18 03:21:19.913562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:16.590 [2024-11-18 03:21:19.913570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:16.590 [2024-11-18 03:21:19.913577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:16.590 [2024-11-18 03:21:19.913584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:16.590 [2024-11-18 03:21:19.913590] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:21:16.590 [2024-11-18 03:21:19.913598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:16.590 [2024-11-18 03:21:19.913611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:16.590 [2024-11-18 03:21:19.913619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.590 [2024-11-18 03:21:19.913626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:16.590 [2024-11-18 03:21:19.913633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:16.590 [2024-11-18 03:21:19.913640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.590 [2024-11-18 03:21:19.913646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:16.590 [2024-11-18 03:21:19.913653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:16.590 [2024-11-18 03:21:19.913660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:16.590 [2024-11-18 03:21:19.913668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:16.590 [2024-11-18 03:21:19.913674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:16.590 [2024-11-18 03:21:19.913680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:16.590 [2024-11-18 03:21:19.913687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:16.590 [2024-11-18 03:21:19.913693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:16.590 [2024-11-18 03:21:19.913700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:16.590 [2024-11-18 03:21:19.913707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:16.590 [2024-11-18 03:21:19.913714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:16.590 [2024-11-18 03:21:19.914001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:16.590 [2024-11-18 03:21:19.914009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:16.590 [2024-11-18 03:21:19.914017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:16.591 [2024-11-18 03:21:19.914024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:16.591 [2024-11-18 03:21:19.914031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:16.591 [2024-11-18 03:21:19.914037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:16.591 [2024-11-18 03:21:19.914045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:16.591 [2024-11-18 03:21:19.914051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:16.591 [2024-11-18 03:21:19.914058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:16.591 [2024-11-18 03:21:19.914065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.591 [2024-11-18 03:21:19.914072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:16.591 [2024-11-18 03:21:19.914079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:16.591 [2024-11-18 03:21:19.914086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.591 [2024-11-18 03:21:19.914092] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:16.591 [2024-11-18 
03:21:19.914102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:16.591 [2024-11-18 03:21:19.914111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:16.591 [2024-11-18 03:21:19.914124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:16.591 [2024-11-18 03:21:19.914132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:16.591 [2024-11-18 03:21:19.914139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:16.591 [2024-11-18 03:21:19.914146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:16.591 [2024-11-18 03:21:19.914154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:16.591 [2024-11-18 03:21:19.914160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:16.591 [2024-11-18 03:21:19.914167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:16.591 [2024-11-18 03:21:19.914176] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:16.591 [2024-11-18 03:21:19.914186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:16.591 [2024-11-18 03:21:19.914194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:16.591 [2024-11-18 03:21:19.914201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:16.591 [2024-11-18 03:21:19.914208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:16.591 [2024-11-18 03:21:19.914215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:16.591 [2024-11-18 03:21:19.914222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:16.591 [2024-11-18 03:21:19.914230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:16.591 [2024-11-18 03:21:19.914238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:16.591 [2024-11-18 03:21:19.914247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:16.591 [2024-11-18 03:21:19.914254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:16.591 [2024-11-18 03:21:19.914261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:16.591 [2024-11-18 03:21:19.914268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:16.591 [2024-11-18 03:21:19.914275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:16.591 [2024-11-18 03:21:19.914283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:21:16.591 [2024-11-18 03:21:19.914290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:16.591 [2024-11-18 03:21:19.914298] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:16.591 [2024-11-18 03:21:19.914306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:16.591 [2024-11-18 03:21:19.915017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:16.591 [2024-11-18 03:21:19.915054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:16.591 [2024-11-18 03:21:19.915085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:16.591 [2024-11-18 03:21:19.915114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:16.591 [2024-11-18 03:21:19.915146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 03:21:19.915167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:16.591 [2024-11-18 03:21:19.915191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms 00:21:16.591 [2024-11-18 03:21:19.915222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.591 [2024-11-18 03:21:19.940087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 03:21:19.940306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:16.591 [2024-11-18 03:21:19.940507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.660 ms 00:21:16.591 [2024-11-18 03:21:19.940554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.591 [2024-11-18 03:21:19.940683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 03:21:19.940973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:16.591 [2024-11-18 03:21:19.940992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:16.591 [2024-11-18 03:21:19.941001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.591 [2024-11-18 03:21:19.953051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 03:21:19.953099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:16.591 [2024-11-18 03:21:19.953110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.975 ms 00:21:16.591 [2024-11-18 03:21:19.953118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.591 [2024-11-18 03:21:19.953152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 03:21:19.953166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:16.591 [2024-11-18 03:21:19.953174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:16.591 [2024-11-18 03:21:19.953182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.591 [2024-11-18 03:21:19.953773] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 03:21:19.953815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:16.591 [2024-11-18 03:21:19.953830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:21:16.591 [2024-11-18 03:21:19.953839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.591 [2024-11-18 03:21:19.953989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 03:21:19.953999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:16.591 [2024-11-18 03:21:19.954009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:21:16.591 [2024-11-18 03:21:19.954018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.591 [2024-11-18 03:21:19.960736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 03:21:19.960780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:16.591 [2024-11-18 03:21:19.960796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.693 ms 00:21:16.591 [2024-11-18 03:21:19.960805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.591 [2024-11-18 03:21:19.964496] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:16.591 [2024-11-18 03:21:19.964545] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:16.591 [2024-11-18 03:21:19.964559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 03:21:19.964567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:16.591 [2024-11-18 03:21:19.964582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.660 ms 00:21:16.591 [2024-11-18 03:21:19.964589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.591 [2024-11-18 03:21:19.980286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 03:21:19.980350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:16.591 [2024-11-18 03:21:19.980369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.647 ms 00:21:16.591 [2024-11-18 03:21:19.980377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.591 [2024-11-18 03:21:19.983296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 03:21:19.983357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:16.591 [2024-11-18 03:21:19.983367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.865 ms 00:21:16.591 [2024-11-18 03:21:19.983374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.591 [2024-11-18 03:21:19.985808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 03:21:19.985849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:16.591 [2024-11-18 03:21:19.985859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.391 ms 00:21:16.591 [2024-11-18 03:21:19.985866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.591 [2024-11-18 03:21:19.986204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 
03:21:19.986216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:16.591 [2024-11-18 03:21:19.986228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:21:16.591 [2024-11-18 03:21:19.986236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.591 [2024-11-18 03:21:20.010413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.591 [2024-11-18 03:21:20.010508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:16.591 [2024-11-18 03:21:20.010524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.160 ms 00:21:16.591 [2024-11-18 03:21:20.010533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.592 [2024-11-18 03:21:20.019037] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:16.592 [2024-11-18 03:21:20.022429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.592 [2024-11-18 03:21:20.022471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:16.592 [2024-11-18 03:21:20.022519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.842 ms 00:21:16.592 [2024-11-18 03:21:20.022528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.592 [2024-11-18 03:21:20.022614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.592 [2024-11-18 03:21:20.022625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:16.592 [2024-11-18 03:21:20.022639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:16.592 [2024-11-18 03:21:20.022647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.592 [2024-11-18 03:21:20.024527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.592 [2024-11-18 03:21:20.024568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:16.592 [2024-11-18 03:21:20.024587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.841 ms 00:21:16.592 [2024-11-18 03:21:20.024599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.592 [2024-11-18 03:21:20.024627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.592 [2024-11-18 03:21:20.024636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:16.592 [2024-11-18 03:21:20.024645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:16.592 [2024-11-18 03:21:20.024653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.592 [2024-11-18 03:21:20.024694] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:16.592 [2024-11-18 03:21:20.024704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.592 [2024-11-18 03:21:20.024714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:16.592 [2024-11-18 03:21:20.024722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:16.592 [2024-11-18 03:21:20.024730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.592 [2024-11-18 03:21:20.030058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.592 [2024-11-18 03:21:20.030114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:16.592 [2024-11-18 
03:21:20.030126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.300 ms 00:21:16.592 [2024-11-18 03:21:20.030134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.592 [2024-11-18 03:21:20.030218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.592 [2024-11-18 03:21:20.030229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:16.592 [2024-11-18 03:21:20.030238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:16.592 [2024-11-18 03:21:20.030246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.592 [2024-11-18 03:21:20.031612] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.305 ms, result 0 00:21:17.980  [2024-11-18T03:21:22.500Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-18T03:21:23.443Z] Copying: 39/1024 [MB] (22 MBps) [2024-11-18T03:21:24.387Z] Copying: 62/1024 [MB] (23 MBps) [2024-11-18T03:21:25.326Z] Copying: 82/1024 [MB] (20 MBps) [2024-11-18T03:21:26.269Z] Copying: 106/1024 [MB] (24 MBps) [2024-11-18T03:21:27.656Z] Copying: 124/1024 [MB] (17 MBps) [2024-11-18T03:21:28.229Z] Copying: 145/1024 [MB] (20 MBps) [2024-11-18T03:21:29.617Z] Copying: 164/1024 [MB] (19 MBps) [2024-11-18T03:21:30.562Z] Copying: 184/1024 [MB] (20 MBps) [2024-11-18T03:21:31.507Z] Copying: 202/1024 [MB] (17 MBps) [2024-11-18T03:21:32.451Z] Copying: 217/1024 [MB] (15 MBps) [2024-11-18T03:21:33.393Z] Copying: 232/1024 [MB] (15 MBps) [2024-11-18T03:21:34.336Z] Copying: 243/1024 [MB] (10 MBps) [2024-11-18T03:21:35.280Z] Copying: 253/1024 [MB] (10 MBps) [2024-11-18T03:21:36.225Z] Copying: 264/1024 [MB] (10 MBps) [2024-11-18T03:21:37.614Z] Copying: 274/1024 [MB] (10 MBps) [2024-11-18T03:21:38.558Z] Copying: 285/1024 [MB] (10 MBps) [2024-11-18T03:21:39.502Z] Copying: 307/1024 [MB] (21 MBps) [2024-11-18T03:21:40.445Z] Copying: 318/1024 [MB] (11 MBps) [2024-11-18T03:21:41.390Z] Copying: 335/1024 [MB] (16 MBps) [2024-11-18T03:21:42.338Z] Copying: 353/1024 [MB] (17 MBps) [2024-11-18T03:21:43.333Z] Copying: 373/1024 [MB] (20 MBps) [2024-11-18T03:21:44.277Z] Copying: 393/1024 [MB] (20 MBps) [2024-11-18T03:21:45.665Z] Copying: 418/1024 [MB] (25 MBps) [2024-11-18T03:21:46.236Z] Copying: 432/1024 [MB] (14 MBps) [2024-11-18T03:21:47.623Z] Copying: 448/1024 [MB] (16 MBps) [2024-11-18T03:21:48.567Z] Copying: 470/1024 [MB] (22 MBps) [2024-11-18T03:21:49.510Z] Copying: 489/1024 [MB] (18 MBps) [2024-11-18T03:21:50.455Z] Copying: 511/1024 [MB] (22 MBps) [2024-11-18T03:21:51.401Z] Copying: 529/1024 [MB] (17 MBps) [2024-11-18T03:21:52.346Z] Copying: 546/1024 [MB] (17 MBps) [2024-11-18T03:21:53.291Z] Copying: 558/1024 [MB] (11 MBps) [2024-11-18T03:21:54.236Z] Copying: 575/1024 [MB] (17 MBps) [2024-11-18T03:21:55.626Z] Copying: 597/1024 [MB] (21 MBps) [2024-11-18T03:21:56.569Z] Copying: 609/1024 [MB] (12 MBps) [2024-11-18T03:21:57.514Z] Copying: 620/1024 [MB] (10 MBps) [2024-11-18T03:21:58.458Z] Copying: 630/1024 [MB] (10 MBps) [2024-11-18T03:21:59.420Z] Copying: 646/1024 [MB] (15 MBps) [2024-11-18T03:22:00.368Z] Copying: 661/1024 [MB] (14 MBps) [2024-11-18T03:22:01.311Z] Copying: 681/1024 [MB] (20 MBps) [2024-11-18T03:22:02.254Z] Copying: 696/1024 [MB] (14 MBps) [2024-11-18T03:22:03.636Z] Copying: 712/1024 [MB] (16 MBps) [2024-11-18T03:22:04.578Z] Copying: 731/1024 [MB] (19 MBps) [2024-11-18T03:22:05.517Z] Copying: 747/1024 [MB] (15 MBps) [2024-11-18T03:22:06.461Z] Copying: 768/1024 [MB] (20 MBps) 
[2024-11-18T03:22:07.406Z] Copying: 782/1024 [MB] (14 MBps) [2024-11-18T03:22:08.357Z] Copying: 801/1024 [MB] (19 MBps) [2024-11-18T03:22:09.298Z] Copying: 816/1024 [MB] (15 MBps) [2024-11-18T03:22:10.242Z] Copying: 830/1024 [MB] (13 MBps) [2024-11-18T03:22:11.633Z] Copying: 842/1024 [MB] (11 MBps) [2024-11-18T03:22:12.288Z] Copying: 855/1024 [MB] (12 MBps) [2024-11-18T03:22:13.231Z] Copying: 873/1024 [MB] (18 MBps) [2024-11-18T03:22:14.619Z] Copying: 894/1024 [MB] (20 MBps) [2024-11-18T03:22:15.566Z] Copying: 905/1024 [MB] (11 MBps) [2024-11-18T03:22:16.510Z] Copying: 923/1024 [MB] (18 MBps) [2024-11-18T03:22:17.454Z] Copying: 937/1024 [MB] (14 MBps) [2024-11-18T03:22:18.398Z] Copying: 955/1024 [MB] (17 MBps) [2024-11-18T03:22:19.353Z] Copying: 972/1024 [MB] (17 MBps) [2024-11-18T03:22:20.298Z] Copying: 983/1024 [MB] (10 MBps) [2024-11-18T03:22:21.243Z] Copying: 999/1024 [MB] (15 MBps) [2024-11-18T03:22:22.188Z] Copying: 1016/1024 [MB] (16 MBps) [2024-11-18T03:22:22.449Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-18 03:22:22.270147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.872 [2024-11-18 03:22:22.270560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:18.872 [2024-11-18 03:22:22.270667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:18.872 [2024-11-18 03:22:22.270699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.872 [2024-11-18 03:22:22.270752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:18.872 [2024-11-18 03:22:22.272033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.872 [2024-11-18 03:22:22.272204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:18.872 [2024-11-18 03:22:22.272268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:22:18.872 [2024-11-18 03:22:22.272281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.872 [2024-11-18 03:22:22.272560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.872 [2024-11-18 03:22:22.272573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:18.872 [2024-11-18 03:22:22.272583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:22:18.872 [2024-11-18 03:22:22.272591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.872 [2024-11-18 03:22:22.279178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.872 [2024-11-18 03:22:22.279369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:18.872 [2024-11-18 03:22:22.279440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.568 ms 00:22:18.872 [2024-11-18 03:22:22.279464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.872 [2024-11-18 03:22:22.286400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.872 [2024-11-18 03:22:22.286878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:18.872 [2024-11-18 03:22:22.287426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.885 ms 00:22:18.872 [2024-11-18 03:22:22.287485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.872 [2024-11-18 03:22:22.291248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.872 [2024-11-18 03:22:22.291484] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:18.872 [2024-11-18 03:22:22.291562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.646 ms 00:22:18.872 [2024-11-18 03:22:22.291589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.872 [2024-11-18 03:22:22.296555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.872 [2024-11-18 03:22:22.296733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:18.872 [2024-11-18 03:22:22.296791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.911 ms 00:22:18.872 [2024-11-18 03:22:22.296816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.135 [2024-11-18 03:22:22.659060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.135 [2024-11-18 03:22:22.659282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:19.135 [2024-11-18 03:22:22.659385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 362.047 ms 00:22:19.135 [2024-11-18 03:22:22.659413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.135 [2024-11-18 03:22:22.662863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.135 [2024-11-18 03:22:22.663039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:19.135 [2024-11-18 03:22:22.663109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.408 ms 00:22:19.135 [2024-11-18 03:22:22.663134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.135 [2024-11-18 03:22:22.666156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.135 [2024-11-18 03:22:22.666347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:19.135 [2024-11-18 03:22:22.666366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.937 ms 00:22:19.135 [2024-11-18 03:22:22.666375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.135 [2024-11-18 03:22:22.668857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.135 [2024-11-18 03:22:22.668891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:19.135 [2024-11-18 03:22:22.668902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.417 ms 00:22:19.135 [2024-11-18 03:22:22.668909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.135 [2024-11-18 03:22:22.671221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.135 [2024-11-18 03:22:22.671397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:19.135 [2024-11-18 03:22:22.671500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.231 ms 00:22:19.135 [2024-11-18 03:22:22.671524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.135 [2024-11-18 03:22:22.671571] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:19.135 [2024-11-18 03:22:22.671604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:19.135 [2024-11-18 03:22:22.671694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:19.135 [2024-11-18 03:22:22.671726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:22:19.135 [2024-11-18 03:22:22.671755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:19.135 [2024-11-18 03:22:22.671815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:19.135 [2024-11-18 03:22:22.671847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:19.135 [2024-11-18 03:22:22.671875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:19.135 [2024-11-18 03:22:22.671905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:19.135 [2024-11-18 03:22:22.671933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:19.135 [2024-11-18 03:22:22.671962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:19.135 [2024-11-18 03:22:22.671991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:19.135 [2024-11-18 03:22:22.672020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.672908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673592] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.673981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674444] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:19.136 [2024-11-18 03:22:22.674890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:19.137 [2024-11-18 03:22:22.674919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:19.137 [2024-11-18 03:22:22.674948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:19.137 [2024-11-18 03:22:22.674976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:19.137 [2024-11-18 03:22:22.675005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:19.137 [2024-11-18 03:22:22.675058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:19.137 [2024-11-18 03:22:22.675088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:19.137 [2024-11-18 03:22:22.675117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:19.137 [2024-11-18 03:22:22.675145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:19.137 [2024-11-18 03:22:22.675183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:19.137 [2024-11-18 03:22:22.675213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:19.137 [2024-11-18 03:22:22.675251] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:19.137 [2024-11-18 03:22:22.675376] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 71c08846-8ea5-4e44-8d58-39f764438c15 00:22:19.137 [2024-11-18 03:22:22.675407] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:19.137 [2024-11-18 03:22:22.675427] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 24256 00:22:19.137 [2024-11-18 03:22:22.675446] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 23296 00:22:19.137 [2024-11-18 03:22:22.675481] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0412 00:22:19.137 [2024-11-18 03:22:22.675500] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:19.137 [2024-11-18 03:22:22.675564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:19.137 [2024-11-18 03:22:22.675692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:19.137 [2024-11-18 03:22:22.675715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:19.137 [2024-11-18 03:22:22.675733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:19.137 [2024-11-18 03:22:22.675754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.137 [2024-11-18 03:22:22.675790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:19.137 [2024-11-18 03:22:22.675879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.184 ms 00:22:19.137 [2024-11-18 03:22:22.675902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.137 [2024-11-18 03:22:22.678345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.137 [2024-11-18 03:22:22.678519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:19.137 [2024-11-18 03:22:22.678583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.405 ms 00:22:19.137 [2024-11-18 03:22:22.678615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.137 [2024-11-18 03:22:22.678789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.137 [2024-11-18 03:22:22.678869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:19.137 [2024-11-18 03:22:22.678915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:22:19.137 [2024-11-18 03:22:22.678947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.137 [2024-11-18 03:22:22.686010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.137 [2024-11-18 03:22:22.686162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:19.137 [2024-11-18 03:22:22.686215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.137 [2024-11-18 03:22:22.686239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.137 [2024-11-18 03:22:22.686435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.137 [2024-11-18 03:22:22.686464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:19.137 [2024-11-18 03:22:22.686497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.137 [2024-11-18 03:22:22.686523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.137 [2024-11-18 03:22:22.686589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.137 [2024-11-18 03:22:22.686683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:19.137 [2024-11-18 03:22:22.686708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:22:19.137 [2024-11-18 03:22:22.686718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.137 [2024-11-18 03:22:22.686736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.137 [2024-11-18 03:22:22.686744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:19.137 [2024-11-18 03:22:22.686752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.137 [2024-11-18 03:22:22.686760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.137 [2024-11-18 03:22:22.699914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.137 [2024-11-18 03:22:22.699951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:19.137 [2024-11-18 03:22:22.699962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.137 [2024-11-18 03:22:22.699970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.399 [2024-11-18 03:22:22.710054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.399 [2024-11-18 03:22:22.710094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:19.399 [2024-11-18 03:22:22.710105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.399 [2024-11-18 03:22:22.710114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.399 [2024-11-18 03:22:22.710167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.399 [2024-11-18 03:22:22.710177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:19.399 [2024-11-18 03:22:22.710192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.399 [2024-11-18 03:22:22.710200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.399 [2024-11-18 03:22:22.710243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.399 [2024-11-18 03:22:22.710252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:19.399 [2024-11-18 03:22:22.710261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.399 [2024-11-18 03:22:22.710269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.399 [2024-11-18 03:22:22.710361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.399 [2024-11-18 03:22:22.710372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:19.399 [2024-11-18 03:22:22.710381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.399 [2024-11-18 03:22:22.710392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.399 [2024-11-18 03:22:22.710420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.399 [2024-11-18 03:22:22.710430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:19.399 [2024-11-18 03:22:22.710438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.399 [2024-11-18 03:22:22.710449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.399 [2024-11-18 03:22:22.710503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.399 [2024-11-18 03:22:22.710512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:19.399 
[2024-11-18 03:22:22.710521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.399 [2024-11-18 03:22:22.710532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.399 [2024-11-18 03:22:22.710578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:19.399 [2024-11-18 03:22:22.710593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:19.399 [2024-11-18 03:22:22.710603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:19.399 [2024-11-18 03:22:22.710614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.399 [2024-11-18 03:22:22.710748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 440.567 ms, result 0 00:22:19.399 00:22:19.399 00:22:19.399 03:22:22 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:21.943 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:21.943 03:22:25 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:21.943 03:22:25 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:21.943 03:22:25 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:21.943 03:22:25 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:21.943 03:22:25 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:21.943 Process with pid 86166 is not found 00:22:21.943 Remove shared memory files 00:22:21.943 03:22:25 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86166 00:22:21.943 03:22:25 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86166 ']' 00:22:21.943 03:22:25 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86166 00:22:21.943 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86166) - No such process 00:22:21.943 03:22:25 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 86166 is not found' 00:22:21.943 03:22:25 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:21.943 03:22:25 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:21.943 03:22:25 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:21.943 03:22:25 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:21.943 03:22:25 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:21.943 03:22:25 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:21.943 03:22:25 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:21.943 ************************************ 00:22:21.943 END TEST ftl_restore 00:22:21.943 ************************************ 00:22:21.943 00:22:21.943 real 4m57.592s 00:22:21.943 user 4m45.257s 00:22:21.943 sys 0m12.100s 00:22:21.943 03:22:25 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:21.943 03:22:25 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:21.943 03:22:25 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:21.943 03:22:25 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:21.943 03:22:25 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:21.943 03:22:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:21.943 
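The ftl_restore run ends here: after the 1024 MiB copy, the trace above shows a clean 'FTL shutdown' (440.567 ms, result 0) and md5sum -c reports the restored testfile as OK. Two of the reported figures can be rechecked directly from the raw numbers in the dumps above: the WAF of 1.0412 from the 24256 total vs. 23296 user writes in the stats dump, and the reported 16 MBps average from the roughly 62 s copy window (first progress stamp near 03:21:20, last at 03:22:22). A minimal off-line sketch of that check, not part of restore.sh or the SPDK test suite, with all constants copied from the log:

  awk 'BEGIN {
      # Counters taken verbatim from the ftl_debug stats dump above.
      total = 24256; user = 23296
      printf "WAF = %d / %d = %.4f\n", total, user, total / user      # log reports 1.0412
      # Copy window estimated from the progress timestamps: ~62 s for 1024 MiB.
      mib = 1024; secs = 62
      printf "avg = %d MiB / %d s = %.1f MBps\n", mib, secs, mib / secs  # log reports average 16 MBps
  }'

A WAF this close to 1.0 is what a single sequential fill should produce: the extra 960 media writes (about 4% over the user writes) are FTL metadata, matching the superblock, band info, and P2L persist steps traced during shutdown.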
************************************ 00:22:21.943 START TEST ftl_dirty_shutdown 00:22:21.943 ************************************ 00:22:21.943 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:21.943 * Looking for test storage... 00:22:21.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:21.943 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:21.943 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:21.943 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:22.204 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:22.204 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:22.204 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:22.204 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:22.204 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:22.204 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:22.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.205 --rc genhtml_branch_coverage=1 00:22:22.205 --rc genhtml_function_coverage=1 00:22:22.205 --rc genhtml_legend=1 00:22:22.205 --rc geninfo_all_blocks=1 00:22:22.205 --rc geninfo_unexecuted_blocks=1 00:22:22.205 00:22:22.205 ' 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:22.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.205 --rc genhtml_branch_coverage=1 00:22:22.205 --rc genhtml_function_coverage=1 00:22:22.205 --rc genhtml_legend=1 00:22:22.205 --rc geninfo_all_blocks=1 00:22:22.205 --rc geninfo_unexecuted_blocks=1 00:22:22.205 00:22:22.205 ' 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:22.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.205 --rc genhtml_branch_coverage=1 00:22:22.205 --rc genhtml_function_coverage=1 00:22:22.205 --rc genhtml_legend=1 00:22:22.205 --rc geninfo_all_blocks=1 00:22:22.205 --rc geninfo_unexecuted_blocks=1 00:22:22.205 00:22:22.205 ' 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:22.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.205 --rc genhtml_branch_coverage=1 00:22:22.205 --rc genhtml_function_coverage=1 00:22:22.205 --rc genhtml_legend=1 00:22:22.205 --rc geninfo_all_blocks=1 00:22:22.205 --rc geninfo_unexecuted_blocks=1 00:22:22.205 00:22:22.205 ' 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:22.205 03:22:25 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89336 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89336 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89336 ']' 00:22:22.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:22.205 03:22:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:22.205 [2024-11-18 03:22:25.647570] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:22.205 [2024-11-18 03:22:25.647961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89336 ] 00:22:22.466 [2024-11-18 03:22:25.804952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.466 [2024-11-18 03:22:25.856868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.038 03:22:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:23.038 03:22:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:23.038 03:22:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:23.038 03:22:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:23.038 03:22:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:23.038 03:22:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:23.038 03:22:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:23.038 03:22:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:23.299 03:22:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:23.299 03:22:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:23.299 03:22:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:23.299 03:22:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:23.299 03:22:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:23.299 03:22:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:23.299 03:22:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:23.299 03:22:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:23.561 03:22:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:23.561 { 00:22:23.561 "name": "nvme0n1", 00:22:23.561 "aliases": [ 00:22:23.561 "a377f910-ae84-43f8-a801-620ae480d63a" 00:22:23.561 ], 00:22:23.561 "product_name": "NVMe disk", 00:22:23.561 "block_size": 4096, 00:22:23.561 "num_blocks": 1310720, 00:22:23.561 "uuid": "a377f910-ae84-43f8-a801-620ae480d63a", 00:22:23.561 "numa_id": -1, 00:22:23.561 "assigned_rate_limits": { 00:22:23.561 "rw_ios_per_sec": 0, 00:22:23.561 "rw_mbytes_per_sec": 0, 00:22:23.561 "r_mbytes_per_sec": 0, 00:22:23.561 "w_mbytes_per_sec": 0 00:22:23.561 }, 00:22:23.561 "claimed": true, 00:22:23.561 "claim_type": "read_many_write_one", 00:22:23.561 "zoned": false, 00:22:23.561 "supported_io_types": { 00:22:23.561 "read": true, 00:22:23.561 "write": true, 00:22:23.561 "unmap": true, 00:22:23.561 "flush": true, 00:22:23.561 "reset": true, 00:22:23.561 "nvme_admin": true, 00:22:23.561 "nvme_io": true, 00:22:23.561 "nvme_io_md": false, 00:22:23.561 "write_zeroes": true, 00:22:23.561 "zcopy": false, 00:22:23.561 "get_zone_info": false, 00:22:23.561 "zone_management": false, 00:22:23.561 "zone_append": false, 00:22:23.561 "compare": true, 00:22:23.561 "compare_and_write": false, 00:22:23.561 "abort": true, 00:22:23.561 "seek_hole": false, 00:22:23.561 "seek_data": false, 00:22:23.561 
"copy": true, 00:22:23.561 "nvme_iov_md": false 00:22:23.561 }, 00:22:23.561 "driver_specific": { 00:22:23.561 "nvme": [ 00:22:23.561 { 00:22:23.561 "pci_address": "0000:00:11.0", 00:22:23.561 "trid": { 00:22:23.561 "trtype": "PCIe", 00:22:23.561 "traddr": "0000:00:11.0" 00:22:23.561 }, 00:22:23.561 "ctrlr_data": { 00:22:23.561 "cntlid": 0, 00:22:23.561 "vendor_id": "0x1b36", 00:22:23.561 "model_number": "QEMU NVMe Ctrl", 00:22:23.561 "serial_number": "12341", 00:22:23.561 "firmware_revision": "8.0.0", 00:22:23.561 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:23.561 "oacs": { 00:22:23.561 "security": 0, 00:22:23.561 "format": 1, 00:22:23.561 "firmware": 0, 00:22:23.561 "ns_manage": 1 00:22:23.561 }, 00:22:23.561 "multi_ctrlr": false, 00:22:23.561 "ana_reporting": false 00:22:23.561 }, 00:22:23.561 "vs": { 00:22:23.561 "nvme_version": "1.4" 00:22:23.561 }, 00:22:23.561 "ns_data": { 00:22:23.561 "id": 1, 00:22:23.561 "can_share": false 00:22:23.561 } 00:22:23.561 } 00:22:23.561 ], 00:22:23.561 "mp_policy": "active_passive" 00:22:23.561 } 00:22:23.561 } 00:22:23.561 ]' 00:22:23.561 03:22:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:23.561 03:22:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:23.561 03:22:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:23.561 03:22:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:23.561 03:22:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:23.561 03:22:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:23.561 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:23.561 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:23.561 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:23.561 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:23.561 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:23.823 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=90dfaef5-0004-49f7-b5bf-fe18413ee04a 00:22:23.823 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:23.823 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 90dfaef5-0004-49f7-b5bf-fe18413ee04a 00:22:24.083 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:24.343 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=3d55b11a-56aa-4c4e-afff-07c51208a5ca 00:22:24.343 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3d55b11a-56aa-4c4e-afff-07c51208a5ca 00:22:24.606 03:22:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b9c95d4c-6014-4ec2-8777-aebfbb05fe6f 00:22:24.606 03:22:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:24.606 03:22:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b9c95d4c-6014-4ec2-8777-aebfbb05fe6f 00:22:24.606 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:24.606 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:24.606 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b9c95d4c-6014-4ec2-8777-aebfbb05fe6f 00:22:24.606 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:24.606 03:22:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b9c95d4c-6014-4ec2-8777-aebfbb05fe6f 00:22:24.606 03:22:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b9c95d4c-6014-4ec2-8777-aebfbb05fe6f 00:22:24.606 03:22:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:24.606 03:22:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:24.606 03:22:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:24.606 03:22:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9c95d4c-6014-4ec2-8777-aebfbb05fe6f 00:22:24.868 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:24.868 { 00:22:24.868 "name": "b9c95d4c-6014-4ec2-8777-aebfbb05fe6f", 00:22:24.868 "aliases": [ 00:22:24.868 "lvs/nvme0n1p0" 00:22:24.868 ], 00:22:24.868 "product_name": "Logical Volume", 00:22:24.868 "block_size": 4096, 00:22:24.868 "num_blocks": 26476544, 00:22:24.868 "uuid": "b9c95d4c-6014-4ec2-8777-aebfbb05fe6f", 00:22:24.868 "assigned_rate_limits": { 00:22:24.868 "rw_ios_per_sec": 0, 00:22:24.868 "rw_mbytes_per_sec": 0, 00:22:24.868 "r_mbytes_per_sec": 0, 00:22:24.868 "w_mbytes_per_sec": 0 00:22:24.868 }, 00:22:24.868 "claimed": false, 00:22:24.868 "zoned": false, 00:22:24.868 "supported_io_types": { 00:22:24.868 "read": true, 00:22:24.868 "write": true, 00:22:24.868 "unmap": true, 00:22:24.868 "flush": false, 00:22:24.868 "reset": true, 00:22:24.868 "nvme_admin": false, 00:22:24.868 "nvme_io": false, 00:22:24.868 "nvme_io_md": false, 00:22:24.868 "write_zeroes": true, 00:22:24.868 "zcopy": false, 00:22:24.868 "get_zone_info": false, 00:22:24.868 "zone_management": false, 00:22:24.868 "zone_append": false, 00:22:24.868 "compare": false, 00:22:24.868 "compare_and_write": false, 00:22:24.868 "abort": false, 00:22:24.868 "seek_hole": true, 00:22:24.868 "seek_data": true, 00:22:24.868 "copy": false, 00:22:24.868 "nvme_iov_md": false 00:22:24.868 }, 00:22:24.868 "driver_specific": { 00:22:24.868 "lvol": { 00:22:24.868 "lvol_store_uuid": "3d55b11a-56aa-4c4e-afff-07c51208a5ca", 00:22:24.868 "base_bdev": "nvme0n1", 00:22:24.868 "thin_provision": true, 00:22:24.868 "num_allocated_clusters": 0, 00:22:24.868 "snapshot": false, 00:22:24.868 "clone": false, 00:22:24.868 "esnap_clone": false 00:22:24.868 } 00:22:24.868 } 00:22:24.868 } 00:22:24.868 ]' 00:22:24.868 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:24.868 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:24.868 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:24.868 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:24.868 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:24.868 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:24.868 03:22:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:24.868 03:22:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:24.868 03:22:28 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:25.129 03:22:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:25.129 03:22:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:25.129 03:22:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b9c95d4c-6014-4ec2-8777-aebfbb05fe6f 00:22:25.129 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b9c95d4c-6014-4ec2-8777-aebfbb05fe6f 00:22:25.129 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:25.129 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:25.129 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:25.129 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9c95d4c-6014-4ec2-8777-aebfbb05fe6f 00:22:25.390 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:25.390 { 00:22:25.390 "name": "b9c95d4c-6014-4ec2-8777-aebfbb05fe6f", 00:22:25.390 "aliases": [ 00:22:25.390 "lvs/nvme0n1p0" 00:22:25.390 ], 00:22:25.390 "product_name": "Logical Volume", 00:22:25.390 "block_size": 4096, 00:22:25.390 "num_blocks": 26476544, 00:22:25.390 "uuid": "b9c95d4c-6014-4ec2-8777-aebfbb05fe6f", 00:22:25.390 "assigned_rate_limits": { 00:22:25.390 "rw_ios_per_sec": 0, 00:22:25.390 "rw_mbytes_per_sec": 0, 00:22:25.390 "r_mbytes_per_sec": 0, 00:22:25.390 "w_mbytes_per_sec": 0 00:22:25.390 }, 00:22:25.390 "claimed": false, 00:22:25.390 "zoned": false, 00:22:25.390 "supported_io_types": { 00:22:25.390 "read": true, 00:22:25.390 "write": true, 00:22:25.390 "unmap": true, 00:22:25.390 "flush": false, 00:22:25.390 "reset": true, 00:22:25.390 "nvme_admin": false, 00:22:25.390 "nvme_io": false, 00:22:25.390 "nvme_io_md": false, 00:22:25.390 "write_zeroes": true, 00:22:25.390 "zcopy": false, 00:22:25.390 "get_zone_info": false, 00:22:25.390 "zone_management": false, 00:22:25.390 "zone_append": false, 00:22:25.390 "compare": false, 00:22:25.390 "compare_and_write": false, 00:22:25.390 "abort": false, 00:22:25.390 "seek_hole": true, 00:22:25.390 "seek_data": true, 00:22:25.390 "copy": false, 00:22:25.390 "nvme_iov_md": false 00:22:25.390 }, 00:22:25.390 "driver_specific": { 00:22:25.390 "lvol": { 00:22:25.390 "lvol_store_uuid": "3d55b11a-56aa-4c4e-afff-07c51208a5ca", 00:22:25.390 "base_bdev": "nvme0n1", 00:22:25.390 "thin_provision": true, 00:22:25.390 "num_allocated_clusters": 0, 00:22:25.390 "snapshot": false, 00:22:25.390 "clone": false, 00:22:25.390 "esnap_clone": false 00:22:25.390 } 00:22:25.390 } 00:22:25.390 } 00:22:25.390 ]' 00:22:25.390 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:25.390 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:25.390 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:25.390 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:25.390 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:25.390 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:25.390 03:22:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:25.390 03:22:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:25.651 03:22:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:25.651 03:22:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b9c95d4c-6014-4ec2-8777-aebfbb05fe6f 00:22:25.651 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b9c95d4c-6014-4ec2-8777-aebfbb05fe6f 00:22:25.651 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:25.651 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:25.651 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:25.651 03:22:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9c95d4c-6014-4ec2-8777-aebfbb05fe6f 00:22:25.651 03:22:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:25.651 { 00:22:25.651 "name": "b9c95d4c-6014-4ec2-8777-aebfbb05fe6f", 00:22:25.651 "aliases": [ 00:22:25.651 "lvs/nvme0n1p0" 00:22:25.651 ], 00:22:25.651 "product_name": "Logical Volume", 00:22:25.651 "block_size": 4096, 00:22:25.651 "num_blocks": 26476544, 00:22:25.651 "uuid": "b9c95d4c-6014-4ec2-8777-aebfbb05fe6f", 00:22:25.651 "assigned_rate_limits": { 00:22:25.651 "rw_ios_per_sec": 0, 00:22:25.651 "rw_mbytes_per_sec": 0, 00:22:25.651 "r_mbytes_per_sec": 0, 00:22:25.651 "w_mbytes_per_sec": 0 00:22:25.651 }, 00:22:25.651 "claimed": false, 00:22:25.651 "zoned": false, 00:22:25.651 "supported_io_types": { 00:22:25.651 "read": true, 00:22:25.651 "write": true, 00:22:25.651 "unmap": true, 00:22:25.651 "flush": false, 00:22:25.651 "reset": true, 00:22:25.651 "nvme_admin": false, 00:22:25.651 "nvme_io": false, 00:22:25.651 "nvme_io_md": false, 00:22:25.651 "write_zeroes": true, 00:22:25.651 "zcopy": false, 00:22:25.651 "get_zone_info": false, 00:22:25.651 "zone_management": false, 00:22:25.651 "zone_append": false, 00:22:25.651 "compare": false, 00:22:25.651 "compare_and_write": false, 00:22:25.651 "abort": false, 00:22:25.651 "seek_hole": true, 00:22:25.651 "seek_data": true, 00:22:25.651 "copy": false, 00:22:25.651 "nvme_iov_md": false 00:22:25.651 }, 00:22:25.651 "driver_specific": { 00:22:25.651 "lvol": { 00:22:25.651 "lvol_store_uuid": "3d55b11a-56aa-4c4e-afff-07c51208a5ca", 00:22:25.651 "base_bdev": "nvme0n1", 00:22:25.651 "thin_provision": true, 00:22:25.651 "num_allocated_clusters": 0, 00:22:25.651 "snapshot": false, 00:22:25.651 "clone": false, 00:22:25.651 "esnap_clone": false 00:22:25.651 } 00:22:25.651 } 00:22:25.651 } 00:22:25.651 ]' 00:22:25.651 03:22:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:25.912 03:22:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:25.912 03:22:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:25.912 03:22:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:25.912 03:22:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:25.912 03:22:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:25.912 03:22:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:25.912 03:22:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b9c95d4c-6014-4ec2-8777-aebfbb05fe6f 
--l2p_dram_limit 10' 00:22:25.912 03:22:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:25.912 03:22:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:25.912 03:22:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:25.912 03:22:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b9c95d4c-6014-4ec2-8777-aebfbb05fe6f --l2p_dram_limit 10 -c nvc0n1p0 00:22:25.912 [2024-11-18 03:22:29.454460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.912 [2024-11-18 03:22:29.454513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:25.912 [2024-11-18 03:22:29.454523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:25.912 [2024-11-18 03:22:29.454531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.912 [2024-11-18 03:22:29.454572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.912 [2024-11-18 03:22:29.454582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:25.913 [2024-11-18 03:22:29.454588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:25.913 [2024-11-18 03:22:29.454599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.913 [2024-11-18 03:22:29.454620] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:25.913 [2024-11-18 03:22:29.454823] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:25.913 [2024-11-18 03:22:29.454835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.913 [2024-11-18 03:22:29.454842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:25.913 [2024-11-18 03:22:29.454851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:22:25.913 [2024-11-18 03:22:29.454860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.913 [2024-11-18 03:22:29.454883] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 87ba54cb-cd24-476c-9269-cc1d7eb66ebe 00:22:25.913 [2024-11-18 03:22:29.455918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.913 [2024-11-18 03:22:29.455970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:25.913 [2024-11-18 03:22:29.455981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:25.913 [2024-11-18 03:22:29.455987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.913 [2024-11-18 03:22:29.460726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.913 [2024-11-18 03:22:29.460752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:25.913 [2024-11-18 03:22:29.460762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:22:25.913 [2024-11-18 03:22:29.460771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.913 [2024-11-18 03:22:29.460829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.913 [2024-11-18 03:22:29.460835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:25.913 [2024-11-18 03:22:29.460843] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:25.913 [2024-11-18 03:22:29.460852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.913 [2024-11-18 03:22:29.460893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.913 [2024-11-18 03:22:29.460900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:25.913 [2024-11-18 03:22:29.460908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:25.913 [2024-11-18 03:22:29.460915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.913 [2024-11-18 03:22:29.460933] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:25.913 [2024-11-18 03:22:29.462180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.913 [2024-11-18 03:22:29.462208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:25.913 [2024-11-18 03:22:29.462218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:22:25.913 [2024-11-18 03:22:29.462225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.913 [2024-11-18 03:22:29.462249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.913 [2024-11-18 03:22:29.462260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:25.913 [2024-11-18 03:22:29.462266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:25.913 [2024-11-18 03:22:29.462275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.913 [2024-11-18 03:22:29.462288] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:25.913 [2024-11-18 03:22:29.462410] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:25.913 [2024-11-18 03:22:29.462422] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:25.913 [2024-11-18 03:22:29.462432] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:25.913 [2024-11-18 03:22:29.462439] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:25.913 [2024-11-18 03:22:29.462449] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:25.913 [2024-11-18 03:22:29.462455] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:25.913 [2024-11-18 03:22:29.462464] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:25.913 [2024-11-18 03:22:29.462469] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:25.913 [2024-11-18 03:22:29.462476] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:25.913 [2024-11-18 03:22:29.462491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.913 [2024-11-18 03:22:29.462498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:25.913 [2024-11-18 03:22:29.462503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:22:25.913 [2024-11-18 03:22:29.462511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.913 [2024-11-18 03:22:29.462576] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.913 [2024-11-18 03:22:29.462585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:25.913 [2024-11-18 03:22:29.462591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:25.913 [2024-11-18 03:22:29.462597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.913 [2024-11-18 03:22:29.462668] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:25.913 [2024-11-18 03:22:29.462678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:25.913 [2024-11-18 03:22:29.462685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:25.913 [2024-11-18 03:22:29.462692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.913 [2024-11-18 03:22:29.462697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:25.913 [2024-11-18 03:22:29.462704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:25.913 [2024-11-18 03:22:29.462709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:25.913 [2024-11-18 03:22:29.462715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:25.913 [2024-11-18 03:22:29.462721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:25.913 [2024-11-18 03:22:29.462728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:25.913 [2024-11-18 03:22:29.462733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:25.913 [2024-11-18 03:22:29.462741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:25.913 [2024-11-18 03:22:29.462746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:25.913 [2024-11-18 03:22:29.462753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:25.913 [2024-11-18 03:22:29.462758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:25.913 [2024-11-18 03:22:29.462764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.913 [2024-11-18 03:22:29.462769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:25.913 [2024-11-18 03:22:29.462778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:25.913 [2024-11-18 03:22:29.462783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.913 [2024-11-18 03:22:29.462790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:25.913 [2024-11-18 03:22:29.462795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:25.913 [2024-11-18 03:22:29.462801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.913 [2024-11-18 03:22:29.462806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:25.913 [2024-11-18 03:22:29.462812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:25.913 [2024-11-18 03:22:29.462817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.913 [2024-11-18 03:22:29.462823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:25.913 [2024-11-18 03:22:29.462828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:25.913 [2024-11-18 03:22:29.462835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.913 [2024-11-18 03:22:29.462841] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:25.913 [2024-11-18 03:22:29.462850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:25.913 [2024-11-18 03:22:29.462856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.913 [2024-11-18 03:22:29.462863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:25.914 [2024-11-18 03:22:29.462869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:25.914 [2024-11-18 03:22:29.462876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:25.914 [2024-11-18 03:22:29.462882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:25.914 [2024-11-18 03:22:29.462890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:25.914 [2024-11-18 03:22:29.462895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:25.914 [2024-11-18 03:22:29.462902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:25.914 [2024-11-18 03:22:29.462908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:25.914 [2024-11-18 03:22:29.462915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.914 [2024-11-18 03:22:29.462921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:25.914 [2024-11-18 03:22:29.462928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:25.914 [2024-11-18 03:22:29.462934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.914 [2024-11-18 03:22:29.462941] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:25.914 [2024-11-18 03:22:29.462947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:25.914 [2024-11-18 03:22:29.462956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:25.914 [2024-11-18 03:22:29.462962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.914 [2024-11-18 03:22:29.462971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:25.914 [2024-11-18 03:22:29.462977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:25.914 [2024-11-18 03:22:29.462984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:25.914 [2024-11-18 03:22:29.462990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:25.914 [2024-11-18 03:22:29.462998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:25.914 [2024-11-18 03:22:29.463004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:25.914 [2024-11-18 03:22:29.463013] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:25.914 [2024-11-18 03:22:29.463020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:25.914 [2024-11-18 03:22:29.463029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:25.914 [2024-11-18 03:22:29.463036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:25.914 [2024-11-18 03:22:29.463043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:25.914 [2024-11-18 03:22:29.463049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:25.914 [2024-11-18 03:22:29.463057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:25.914 [2024-11-18 03:22:29.463063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:25.914 [2024-11-18 03:22:29.463073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:25.914 [2024-11-18 03:22:29.463079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:25.914 [2024-11-18 03:22:29.463086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:25.914 [2024-11-18 03:22:29.463093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:25.914 [2024-11-18 03:22:29.463100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:25.914 [2024-11-18 03:22:29.463106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:25.914 [2024-11-18 03:22:29.463114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:25.914 [2024-11-18 03:22:29.463120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:25.914 [2024-11-18 03:22:29.463128] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:25.914 [2024-11-18 03:22:29.463136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:25.914 [2024-11-18 03:22:29.463147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:25.914 [2024-11-18 03:22:29.463153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:25.914 [2024-11-18 03:22:29.463161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:25.914 [2024-11-18 03:22:29.463168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:25.914 [2024-11-18 03:22:29.463175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.914 [2024-11-18 03:22:29.463184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:25.914 [2024-11-18 03:22:29.463193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:22:25.914 [2024-11-18 03:22:29.463199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.914 [2024-11-18 03:22:29.463245] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:25.914 [2024-11-18 03:22:29.463253] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:30.120 [2024-11-18 03:22:33.455737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.455826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:30.120 [2024-11-18 03:22:33.455849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3992.467 ms 00:22:30.120 [2024-11-18 03:22:33.455867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.469614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.469676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:30.120 [2024-11-18 03:22:33.469693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.625 ms 00:22:30.120 [2024-11-18 03:22:33.469702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.469832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.469850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:30.120 [2024-11-18 03:22:33.469866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:30.120 [2024-11-18 03:22:33.469874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.481589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.481644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:30.120 [2024-11-18 03:22:33.481658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.650 ms 00:22:30.120 [2024-11-18 03:22:33.481666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.481702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.481713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:30.120 [2024-11-18 03:22:33.481725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:30.120 [2024-11-18 03:22:33.481733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.482280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.482304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:30.120 [2024-11-18 03:22:33.482357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:22:30.120 [2024-11-18 03:22:33.482366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.482506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.482517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:30.120 [2024-11-18 03:22:33.482532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:22:30.120 [2024-11-18 03:22:33.482541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.497747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.497815] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:30.120 [2024-11-18 03:22:33.497835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.176 ms 00:22:30.120 [2024-11-18 03:22:33.497846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.508279] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:30.120 [2024-11-18 03:22:33.512080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.512416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:30.120 [2024-11-18 03:22:33.512437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.107 ms 00:22:30.120 [2024-11-18 03:22:33.512449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.606289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.606376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:30.120 [2024-11-18 03:22:33.606391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.803 ms 00:22:30.120 [2024-11-18 03:22:33.606409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.606636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.606652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:30.120 [2024-11-18 03:22:33.606661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:22:30.120 [2024-11-18 03:22:33.606672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.612609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.612663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:30.120 [2024-11-18 03:22:33.612675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.897 ms 00:22:30.120 [2024-11-18 03:22:33.612686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.617511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.617563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:30.120 [2024-11-18 03:22:33.617574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.768 ms 00:22:30.120 [2024-11-18 03:22:33.617584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.617917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.617930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:30.120 [2024-11-18 03:22:33.617939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:22:30.120 [2024-11-18 03:22:33.617951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.663094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.663155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:30.120 [2024-11-18 03:22:33.663169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.121 ms 00:22:30.120 [2024-11-18 03:22:33.663180] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.670210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.670272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:30.120 [2024-11-18 03:22:33.670283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.952 ms 00:22:30.120 [2024-11-18 03:22:33.670294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.676041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.676097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:30.120 [2024-11-18 03:22:33.676107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.676 ms 00:22:30.120 [2024-11-18 03:22:33.676117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.682466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.682547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:30.120 [2024-11-18 03:22:33.682558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.303 ms 00:22:30.120 [2024-11-18 03:22:33.682572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.682622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.682634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:30.120 [2024-11-18 03:22:33.682643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:30.120 [2024-11-18 03:22:33.682653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.682726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.120 [2024-11-18 03:22:33.682739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:30.120 [2024-11-18 03:22:33.682747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:30.120 [2024-11-18 03:22:33.682765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.120 [2024-11-18 03:22:33.683920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4228.983 ms, result 0 00:22:30.120 { 00:22:30.120 "name": "ftl0", 00:22:30.120 "uuid": "87ba54cb-cd24-476c-9269-cc1d7eb66ebe" 00:22:30.120 } 00:22:30.382 03:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:30.382 03:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:30.382 03:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:30.382 03:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:30.382 03:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:30.643 /dev/nbd0 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:30.643 1+0 records in 00:22:30.643 1+0 records out 00:22:30.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430525 s, 9.5 MB/s 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:22:30.643 03:22:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:30.904 [2024-11-18 03:22:34.256557] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:30.904 [2024-11-18 03:22:34.256714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89486 ] 00:22:30.904 [2024-11-18 03:22:34.411995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.904 [2024-11-18 03:22:34.461300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.289  [2024-11-18T03:22:36.807Z] Copying: 186/1024 [MB] (186 MBps) [2024-11-18T03:22:37.750Z] Copying: 376/1024 [MB] (190 MBps) [2024-11-18T03:22:38.693Z] Copying: 571/1024 [MB] (194 MBps) [2024-11-18T03:22:39.627Z] Copying: 764/1024 [MB] (192 MBps) [2024-11-18T03:22:39.627Z] Copying: 1010/1024 [MB] (245 MBps) [2024-11-18T03:22:39.886Z] Copying: 1024/1024 [MB] (average 202 MBps) 00:22:36.309 00:22:36.309 03:22:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:38.867 03:22:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:38.867 [2024-11-18 03:22:42.002083] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
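The I/O phase running here reduces to three commands, copied from the dirty_shutdown.sh trace: 262144 blocks of 4096 bytes (1 GiB) of random data staged to a file, checksummed, then replayed onto the FTL device through the nbd export (the checksum is presumably re-verified after the dirty shutdown):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct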
00:22:38.867 [2024-11-18 03:22:42.002200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89572 ] 00:22:38.867 [2024-11-18 03:22:42.151594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.867 [2024-11-18 03:22:42.184835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.803  [2024-11-18T03:22:44.322Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-18T03:22:45.266Z] Copying: 35/1024 [MB] (17 MBps) [2024-11-18T03:22:46.653Z] Copying: 59/1024 [MB] (23 MBps) [2024-11-18T03:22:47.596Z] Copying: 81/1024 [MB] (22 MBps) [2024-11-18T03:22:48.539Z] Copying: 102/1024 [MB] (20 MBps) [2024-11-18T03:22:49.484Z] Copying: 118/1024 [MB] (16 MBps) [2024-11-18T03:22:50.429Z] Copying: 138/1024 [MB] (19 MBps) [2024-11-18T03:22:51.371Z] Copying: 153/1024 [MB] (15 MBps) [2024-11-18T03:22:52.316Z] Copying: 177/1024 [MB] (23 MBps) [2024-11-18T03:22:53.258Z] Copying: 201/1024 [MB] (24 MBps) [2024-11-18T03:22:54.642Z] Copying: 222/1024 [MB] (20 MBps) [2024-11-18T03:22:55.647Z] Copying: 251/1024 [MB] (28 MBps) [2024-11-18T03:22:56.589Z] Copying: 273/1024 [MB] (21 MBps) [2024-11-18T03:22:57.528Z] Copying: 294/1024 [MB] (21 MBps) [2024-11-18T03:22:58.469Z] Copying: 323/1024 [MB] (29 MBps) [2024-11-18T03:22:59.412Z] Copying: 358/1024 [MB] (34 MBps) [2024-11-18T03:23:00.355Z] Copying: 385/1024 [MB] (27 MBps) [2024-11-18T03:23:01.299Z] Copying: 411/1024 [MB] (25 MBps) [2024-11-18T03:23:02.241Z] Copying: 443/1024 [MB] (31 MBps) [2024-11-18T03:23:03.625Z] Copying: 466/1024 [MB] (23 MBps) [2024-11-18T03:23:04.567Z] Copying: 497/1024 [MB] (31 MBps) [2024-11-18T03:23:05.510Z] Copying: 529/1024 [MB] (32 MBps) [2024-11-18T03:23:06.450Z] Copying: 562/1024 [MB] (32 MBps) [2024-11-18T03:23:07.387Z] Copying: 593/1024 [MB] (31 MBps) [2024-11-18T03:23:08.325Z] Copying: 628/1024 [MB] (34 MBps) [2024-11-18T03:23:09.265Z] Copying: 663/1024 [MB] (35 MBps) [2024-11-18T03:23:10.709Z] Copying: 696/1024 [MB] (32 MBps) [2024-11-18T03:23:11.282Z] Copying: 732/1024 [MB] (35 MBps) [2024-11-18T03:23:12.667Z] Copying: 770/1024 [MB] (38 MBps) [2024-11-18T03:23:13.237Z] Copying: 803/1024 [MB] (33 MBps) [2024-11-18T03:23:14.620Z] Copying: 836/1024 [MB] (32 MBps) [2024-11-18T03:23:15.561Z] Copying: 869/1024 [MB] (33 MBps) [2024-11-18T03:23:16.497Z] Copying: 905/1024 [MB] (35 MBps) [2024-11-18T03:23:17.437Z] Copying: 941/1024 [MB] (35 MBps) [2024-11-18T03:23:18.377Z] Copying: 974/1024 [MB] (33 MBps) [2024-11-18T03:23:18.948Z] Copying: 1008/1024 [MB] (33 MBps) [2024-11-18T03:23:18.948Z] Copying: 1024/1024 [MB] (average 28 MBps) 00:23:15.371 00:23:15.371 03:23:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:15.371 03:23:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:15.629 03:23:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:15.890 [2024-11-18 03:23:19.268168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.890 [2024-11-18 03:23:19.268214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:15.890 [2024-11-18 03:23:19.268228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:15.890 [2024-11-18 03:23:19.268236] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.890 [2024-11-18 03:23:19.268256] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:15.890 [2024-11-18 03:23:19.268783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.890 [2024-11-18 03:23:19.268807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:15.890 [2024-11-18 03:23:19.268816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:23:15.890 [2024-11-18 03:23:19.268826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.890 [2024-11-18 03:23:19.270728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.890 [2024-11-18 03:23:19.270758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:15.890 [2024-11-18 03:23:19.270766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.885 ms 00:23:15.890 [2024-11-18 03:23:19.270774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.890 [2024-11-18 03:23:19.287560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.890 [2024-11-18 03:23:19.287592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:15.890 [2024-11-18 03:23:19.287601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.772 ms 00:23:15.890 [2024-11-18 03:23:19.287608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.891 [2024-11-18 03:23:19.292353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.891 [2024-11-18 03:23:19.292377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:15.891 [2024-11-18 03:23:19.292386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.717 ms 00:23:15.891 [2024-11-18 03:23:19.292394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.891 [2024-11-18 03:23:19.294529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.891 [2024-11-18 03:23:19.294560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:15.891 [2024-11-18 03:23:19.294568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.089 ms 00:23:15.891 [2024-11-18 03:23:19.294576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.891 [2024-11-18 03:23:19.300036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.891 [2024-11-18 03:23:19.300068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:15.891 [2024-11-18 03:23:19.300078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.433 ms 00:23:15.891 [2024-11-18 03:23:19.300086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.891 [2024-11-18 03:23:19.300181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.891 [2024-11-18 03:23:19.300190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:15.891 [2024-11-18 03:23:19.300196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:15.891 [2024-11-18 03:23:19.300204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.891 [2024-11-18 03:23:19.303031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.891 [2024-11-18 03:23:19.303061] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:15.891 [2024-11-18 03:23:19.303068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.813 ms 00:23:15.891 [2024-11-18 03:23:19.303076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.891 [2024-11-18 03:23:19.305135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.891 [2024-11-18 03:23:19.305169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:15.891 [2024-11-18 03:23:19.305177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:23:15.891 [2024-11-18 03:23:19.305185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.891 [2024-11-18 03:23:19.306542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.891 [2024-11-18 03:23:19.306571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:15.891 [2024-11-18 03:23:19.306578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.331 ms 00:23:15.891 [2024-11-18 03:23:19.306585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.891 [2024-11-18 03:23:19.308378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.891 [2024-11-18 03:23:19.308405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:15.891 [2024-11-18 03:23:19.308412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.747 ms 00:23:15.891 [2024-11-18 03:23:19.308419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.891 [2024-11-18 03:23:19.308444] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:15.891 [2024-11-18 03:23:19.308458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308716] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:15.891 [2024-11-18 03:23:19.308902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.308911] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.308918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.308926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.308932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.308940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.308946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.308954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.308960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.308967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.308973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.308982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.308988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.308995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 
03:23:19.309082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:15.892 [2024-11-18 03:23:19.309177] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:15.892 [2024-11-18 03:23:19.309188] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 87ba54cb-cd24-476c-9269-cc1d7eb66ebe 00:23:15.892 [2024-11-18 03:23:19.309197] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:15.892 [2024-11-18 03:23:19.309203] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:15.892 [2024-11-18 03:23:19.309210] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:15.892 [2024-11-18 03:23:19.309216] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:15.892 [2024-11-18 03:23:19.309223] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:15.892 [2024-11-18 03:23:19.309230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:15.892 [2024-11-18 03:23:19.309238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:15.892 [2024-11-18 03:23:19.309243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:15.892 [2024-11-18 03:23:19.309249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:15.892 [2024-11-18 03:23:19.309254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.892 [2024-11-18 03:23:19.309261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:15.892 [2024-11-18 03:23:19.309268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:23:15.892 [2024-11-18 03:23:19.309279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
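The statistics dump that closes this first shutdown is the most useful health signal in the phase above: ftl0 reports total writes: 960 against user writes: 0, so all media traffic so far is metadata and the write-amplification factor prints as WAF: inf (media writes divided by user writes, with a zero denominator). A minimal sketch of that same calculation, assuming the *NOTICE* lines above were captured to a hypothetical file named ftl_stats.log:

# Hedged sketch, not part of the test: recompute WAF from a captured stats dump.
total=$(grep -o 'total writes: [0-9]*' ftl_stats.log | tail -n1 | awk '{print $3}')
user=$(grep -o 'user writes: [0-9]*' ftl_stats.log | tail -n1 | awk '{print $3}')
if [ "$user" -eq 0 ]; then
    echo 'WAF: inf'    # the case above: 960 media writes, 0 user writes
else
    awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.3f\n", t / u }'
fi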
00:23:15.892 [2024-11-18 03:23:19.311158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.892 [2024-11-18 03:23:19.311253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:15.892 [2024-11-18 03:23:19.311368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.856 ms 00:23:15.892 [2024-11-18 03:23:19.311390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.892 [2024-11-18 03:23:19.311489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.892 [2024-11-18 03:23:19.311517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:15.892 [2024-11-18 03:23:19.311565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:15.892 [2024-11-18 03:23:19.311585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.892 [2024-11-18 03:23:19.317612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.892 [2024-11-18 03:23:19.317712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:15.892 [2024-11-18 03:23:19.317756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.892 [2024-11-18 03:23:19.317776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.892 [2024-11-18 03:23:19.317832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.892 [2024-11-18 03:23:19.317886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:15.892 [2024-11-18 03:23:19.317904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.892 [2024-11-18 03:23:19.317920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.892 [2024-11-18 03:23:19.317992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.892 [2024-11-18 03:23:19.318026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:15.892 [2024-11-18 03:23:19.318042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.892 [2024-11-18 03:23:19.318060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.892 [2024-11-18 03:23:19.318163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.892 [2024-11-18 03:23:19.318185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:15.892 [2024-11-18 03:23:19.318201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.892 [2024-11-18 03:23:19.318218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.892 [2024-11-18 03:23:19.328997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.892 [2024-11-18 03:23:19.329119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:15.892 [2024-11-18 03:23:19.329160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.892 [2024-11-18 03:23:19.329181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.892 [2024-11-18 03:23:19.338082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.892 [2024-11-18 03:23:19.338197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:15.892 [2024-11-18 03:23:19.338210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.892 [2024-11-18 
03:23:19.338219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.892 [2024-11-18 03:23:19.338286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.892 [2024-11-18 03:23:19.338297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:15.892 [2024-11-18 03:23:19.338304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.892 [2024-11-18 03:23:19.338325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.892 [2024-11-18 03:23:19.338363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.892 [2024-11-18 03:23:19.338372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:15.892 [2024-11-18 03:23:19.338379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.892 [2024-11-18 03:23:19.338386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.892 [2024-11-18 03:23:19.338448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.892 [2024-11-18 03:23:19.338462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:15.892 [2024-11-18 03:23:19.338468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.892 [2024-11-18 03:23:19.338476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.892 [2024-11-18 03:23:19.338525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.892 [2024-11-18 03:23:19.338539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:15.892 [2024-11-18 03:23:19.338546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.892 [2024-11-18 03:23:19.338555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.892 [2024-11-18 03:23:19.338590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.893 [2024-11-18 03:23:19.338603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:15.893 [2024-11-18 03:23:19.338610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.893 [2024-11-18 03:23:19.338621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.893 [2024-11-18 03:23:19.338673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.893 [2024-11-18 03:23:19.338682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:15.893 [2024-11-18 03:23:19.338691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.893 [2024-11-18 03:23:19.338699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.893 [2024-11-18 03:23:19.338840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.629 ms, result 0 00:23:15.893 true 00:23:15.893 03:23:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89336 00:23:15.893 03:23:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89336 00:23:15.893 03:23:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:15.893 [2024-11-18 03:23:19.426032] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
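The transfer requested at dirty_shutdown.sh line 87 just above sizes the reference file at exactly 1 GiB; a quick arithmetic check of the --bs/--count pair from the command as logged:

# 262144 blocks of 4096 bytes each:
echo $(( 262144 * 4096 ))                 # 1073741824 bytes
echo $(( 262144 * 4096 / 1024 / 1024 ))   # 1024 MiB, matching the 1024/1024 [MB] progress below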
00:23:15.893 [2024-11-18 03:23:19.426260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89966 ] 00:23:16.152 [2024-11-18 03:23:19.575463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.152 [2024-11-18 03:23:19.626026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.534  [2024-11-18T03:23:22.110Z] Copying: 187/1024 [MB] (187 MBps) [2024-11-18T03:23:23.046Z] Copying: 427/1024 [MB] (240 MBps) [2024-11-18T03:23:23.980Z] Copying: 683/1024 [MB] (256 MBps) [2024-11-18T03:23:24.238Z] Copying: 934/1024 [MB] (250 MBps) [2024-11-18T03:23:24.496Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:23:20.919 00:23:20.919 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89336 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:20.919 03:23:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:20.919 [2024-11-18 03:23:24.329968] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:20.920 [2024-11-18 03:23:24.330144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90021 ] 00:23:20.920 [2024-11-18 03:23:24.477305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.177 [2024-11-18 03:23:24.519107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.177 [2024-11-18 03:23:24.618897] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:21.177 [2024-11-18 03:23:24.618961] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:21.177 [2024-11-18 03:23:24.681616] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:21.177 [2024-11-18 03:23:24.682133] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:21.177 [2024-11-18 03:23:24.682817] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:21.746 [2024-11-18 03:23:25.041551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.746 [2024-11-18 03:23:25.041584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:21.746 [2024-11-18 03:23:25.041595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:21.746 [2024-11-18 03:23:25.041602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.746 [2024-11-18 03:23:25.041644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.746 [2024-11-18 03:23:25.041656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:21.746 [2024-11-18 03:23:25.041662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:21.746 [2024-11-18 03:23:25.041668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.746 [2024-11-18 03:23:25.041681] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:21.746 
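Everything from the kill -9 above to this point is the dirty-shutdown scenario itself: the spdk_tgt holding ftl0 is killed mid-flight, its shm trace file is removed, and spdk_dd is then pointed at the saved JSON config so it can bring the bdev stack up stand-alone (hence the two transient "unable to find bdev" notices and the blobstore recovery) before replaying data into the dirty device. A condensed sketch of that sequence, with the paths, flags, and block counts taken verbatim from the log and $svcpid standing in for the killed target's PID (89336 above):

SPDK=/home/vagrant/spdk_repo/spdk              # repo path as it appears in the log
kill -9 "$svcpid"                              # SIGKILL the target while ftl0 is live
rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"
# 1 GiB of random reference data into a plain file first
"$SPDK/build/bin/spdk_dd" --if=/dev/urandom \
    --of="$SPDK/test/ftl/testfile2" --bs=4096 --count=262144
# Replay into the dirty FTL bdev at block offset 262144; --json lets spdk_dd
# construct ftl0 itself from the saved configuration
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile2" --ob=ftl0 \
    --count=262144 --seek=262144 --json="$SPDK/test/ftl/config/ftl.json"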
[2024-11-18 03:23:25.041875] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:21.746 [2024-11-18 03:23:25.041887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.746 [2024-11-18 03:23:25.041893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:21.746 [2024-11-18 03:23:25.041899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:23:21.746 [2024-11-18 03:23:25.041905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.746 [2024-11-18 03:23:25.043180] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:21.746 [2024-11-18 03:23:25.046036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.746 [2024-11-18 03:23:25.046067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:21.746 [2024-11-18 03:23:25.046076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.857 ms 00:23:21.746 [2024-11-18 03:23:25.046082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.746 [2024-11-18 03:23:25.046131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.746 [2024-11-18 03:23:25.046138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:21.746 [2024-11-18 03:23:25.046145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:21.746 [2024-11-18 03:23:25.046151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.746 [2024-11-18 03:23:25.052436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.746 [2024-11-18 03:23:25.052593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:21.746 [2024-11-18 03:23:25.052606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.247 ms 00:23:21.746 [2024-11-18 03:23:25.052613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.746 [2024-11-18 03:23:25.052684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.746 [2024-11-18 03:23:25.052691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:21.746 [2024-11-18 03:23:25.052698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:21.746 [2024-11-18 03:23:25.052704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.746 [2024-11-18 03:23:25.052744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.746 [2024-11-18 03:23:25.052755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:21.746 [2024-11-18 03:23:25.052761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:21.746 [2024-11-18 03:23:25.052767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.746 [2024-11-18 03:23:25.052784] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:21.746 [2024-11-18 03:23:25.054451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.746 [2024-11-18 03:23:25.054473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:21.746 [2024-11-18 03:23:25.054484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.670 ms 00:23:21.746 [2024-11-18 03:23:25.054500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:21.746 [2024-11-18 03:23:25.054527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.746 [2024-11-18 03:23:25.054534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:21.746 [2024-11-18 03:23:25.054541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:21.746 [2024-11-18 03:23:25.054547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.746 [2024-11-18 03:23:25.054564] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:21.746 [2024-11-18 03:23:25.054581] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:21.747 [2024-11-18 03:23:25.054613] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:21.747 [2024-11-18 03:23:25.054628] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:21.747 [2024-11-18 03:23:25.054711] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:21.747 [2024-11-18 03:23:25.054720] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:21.747 [2024-11-18 03:23:25.054729] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:21.747 [2024-11-18 03:23:25.054740] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:21.747 [2024-11-18 03:23:25.054748] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:21.747 [2024-11-18 03:23:25.054755] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:21.747 [2024-11-18 03:23:25.054763] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:21.747 [2024-11-18 03:23:25.054769] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:21.747 [2024-11-18 03:23:25.054775] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:21.747 [2024-11-18 03:23:25.054781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.747 [2024-11-18 03:23:25.054789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:21.747 [2024-11-18 03:23:25.054797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:23:21.747 [2024-11-18 03:23:25.054805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.747 [2024-11-18 03:23:25.054874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.747 [2024-11-18 03:23:25.054881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:21.747 [2024-11-18 03:23:25.054887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:21.747 [2024-11-18 03:23:25.054901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.747 [2024-11-18 03:23:25.054975] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:21.747 [2024-11-18 03:23:25.054984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:21.747 [2024-11-18 03:23:25.054993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.747 [2024-11-18 03:23:25.055002] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.747 [2024-11-18 03:23:25.055013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:21.747 [2024-11-18 03:23:25.055019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:21.747 [2024-11-18 03:23:25.055025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:21.747 [2024-11-18 03:23:25.055032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:21.747 [2024-11-18 03:23:25.055038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:21.747 [2024-11-18 03:23:25.055043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:21.747 [2024-11-18 03:23:25.055049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:21.747 [2024-11-18 03:23:25.055054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:21.747 [2024-11-18 03:23:25.055059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:21.747 [2024-11-18 03:23:25.055065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:21.747 [2024-11-18 03:23:25.055070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:21.747 [2024-11-18 03:23:25.055077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.747 [2024-11-18 03:23:25.055082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:21.747 [2024-11-18 03:23:25.055088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:21.747 [2024-11-18 03:23:25.055093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.747 [2024-11-18 03:23:25.055099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:21.747 [2024-11-18 03:23:25.055109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:21.747 [2024-11-18 03:23:25.055116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.747 [2024-11-18 03:23:25.055122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:21.747 [2024-11-18 03:23:25.055127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:21.747 [2024-11-18 03:23:25.055134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.747 [2024-11-18 03:23:25.055140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:21.747 [2024-11-18 03:23:25.055146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:21.747 [2024-11-18 03:23:25.055152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.747 [2024-11-18 03:23:25.055157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:21.747 [2024-11-18 03:23:25.055163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:21.747 [2024-11-18 03:23:25.055169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.747 [2024-11-18 03:23:25.055176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:21.747 [2024-11-18 03:23:25.055182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:21.747 [2024-11-18 03:23:25.055188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.747 [2024-11-18 03:23:25.055194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:21.747 
[2024-11-18 03:23:25.055200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:21.747 [2024-11-18 03:23:25.055208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.747 [2024-11-18 03:23:25.055214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:21.747 [2024-11-18 03:23:25.055220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:21.747 [2024-11-18 03:23:25.055226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.747 [2024-11-18 03:23:25.055232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:21.747 [2024-11-18 03:23:25.055238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:21.747 [2024-11-18 03:23:25.055244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.747 [2024-11-18 03:23:25.055250] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:21.747 [2024-11-18 03:23:25.055260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:21.747 [2024-11-18 03:23:25.055267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.747 [2024-11-18 03:23:25.055273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.747 [2024-11-18 03:23:25.055283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:21.747 [2024-11-18 03:23:25.055289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:21.747 [2024-11-18 03:23:25.055295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:21.747 [2024-11-18 03:23:25.055301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:21.747 [2024-11-18 03:23:25.055307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:21.747 [2024-11-18 03:23:25.055330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:21.747 [2024-11-18 03:23:25.055338] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:21.747 [2024-11-18 03:23:25.055346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.747 [2024-11-18 03:23:25.055358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:21.747 [2024-11-18 03:23:25.055364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:21.747 [2024-11-18 03:23:25.055370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:21.747 [2024-11-18 03:23:25.055376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:21.747 [2024-11-18 03:23:25.055383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:21.747 [2024-11-18 03:23:25.055390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:21.747 [2024-11-18 03:23:25.055396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:23:21.747 [2024-11-18 03:23:25.055402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:21.747 [2024-11-18 03:23:25.055409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:21.747 [2024-11-18 03:23:25.055415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:21.747 [2024-11-18 03:23:25.055421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:21.747 [2024-11-18 03:23:25.055428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:21.747 [2024-11-18 03:23:25.055436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:21.747 [2024-11-18 03:23:25.055445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:21.747 [2024-11-18 03:23:25.055451] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:21.747 [2024-11-18 03:23:25.055461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.747 [2024-11-18 03:23:25.055469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:21.747 [2024-11-18 03:23:25.055476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:21.747 [2024-11-18 03:23:25.055482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:21.747 [2024-11-18 03:23:25.055488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:21.747 [2024-11-18 03:23:25.055493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.747 [2024-11-18 03:23:25.055503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:21.748 [2024-11-18 03:23:25.055509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:23:21.748 [2024-11-18 03:23:25.055515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.074566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.074730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:21.748 [2024-11-18 03:23:25.074800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.014 ms 00:23:21.748 [2024-11-18 03:23:25.074830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.074955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.075030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:21.748 [2024-11-18 03:23:25.075068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:23:21.748 [2024-11-18 
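The layout dump above is internally consistent and worth a spot check: 20971520 L2P entries at the reported 4-byte address size is exactly the 80.00 MiB l2p region, and the same entry count at one 4-KiB block per entry (the block size used throughout this test) gives 80 GiB of user-addressable space on the 103424.00 MiB base device, the rest of which is taken by the metadata regions and the larger-than-user data area. Arithmetic only, using just the numbers printed above:

echo $(( 20971520 * 4 / 1024 / 1024 ))           # 80 -> MiB, the l2p region size
echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 )) # 80 -> GiB of addressable user space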
03:23:25.075093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.085291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.085412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:21.748 [2024-11-18 03:23:25.085466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.072 ms 00:23:21.748 [2024-11-18 03:23:25.085483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.085524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.085547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.748 [2024-11-18 03:23:25.085563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:21.748 [2024-11-18 03:23:25.085582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.086003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.086142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.748 [2024-11-18 03:23:25.086186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:23:21.748 [2024-11-18 03:23:25.086206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.086343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.086363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.748 [2024-11-18 03:23:25.086382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:23:21.748 [2024-11-18 03:23:25.086429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.092047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.092129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:21.748 [2024-11-18 03:23:25.092177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.586 ms 00:23:21.748 [2024-11-18 03:23:25.092195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.095168] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:21.748 [2024-11-18 03:23:25.095266] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:21.748 [2024-11-18 03:23:25.095440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.095458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:21.748 [2024-11-18 03:23:25.095474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.169 ms 00:23:21.748 [2024-11-18 03:23:25.095488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.107283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.107391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:21.748 [2024-11-18 03:23:25.107435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.760 ms 00:23:21.748 [2024-11-18 03:23:25.107453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:23:21.748 [2024-11-18 03:23:25.109587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.109674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:21.748 [2024-11-18 03:23:25.109716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.094 ms 00:23:21.748 [2024-11-18 03:23:25.109738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.111508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.111591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:21.748 [2024-11-18 03:23:25.111632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.722 ms 00:23:21.748 [2024-11-18 03:23:25.111648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.112185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.112302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:21.748 [2024-11-18 03:23:25.112387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:23:21.748 [2024-11-18 03:23:25.112407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.131063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.131170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:21.748 [2024-11-18 03:23:25.131212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.601 ms 00:23:21.748 [2024-11-18 03:23:25.131235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.137981] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:21.748 [2024-11-18 03:23:25.140206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.140231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:21.748 [2024-11-18 03:23:25.140240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.857 ms 00:23:21.748 [2024-11-18 03:23:25.140247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.140301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.140321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:21.748 [2024-11-18 03:23:25.140329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:21.748 [2024-11-18 03:23:25.140337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.140406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.140415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:21.748 [2024-11-18 03:23:25.140425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:21.748 [2024-11-18 03:23:25.140433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.140452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.140472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:21.748 
[2024-11-18 03:23:25.140479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:21.748 [2024-11-18 03:23:25.140485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.140515] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:21.748 [2024-11-18 03:23:25.140526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.140535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:21.748 [2024-11-18 03:23:25.140544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:21.748 [2024-11-18 03:23:25.140550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.144329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.144355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:21.748 [2024-11-18 03:23:25.144363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.761 ms 00:23:21.748 [2024-11-18 03:23:25.144370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.144427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.748 [2024-11-18 03:23:25.144436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:21.748 [2024-11-18 03:23:25.144443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:21.748 [2024-11-18 03:23:25.144449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.748 [2024-11-18 03:23:25.145309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.383 ms, result 0 00:23:22.681  [2024-11-18T03:23:27.191Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-18T03:23:28.565Z] Copying: 41/1024 [MB] (22 MBps) [2024-11-18T03:23:29.499Z] Copying: 64/1024 [MB] (22 MBps) [2024-11-18T03:23:30.433Z] Copying: 78/1024 [MB] (14 MBps) [2024-11-18T03:23:31.367Z] Copying: 90/1024 [MB] (11 MBps) [2024-11-18T03:23:32.302Z] Copying: 101/1024 [MB] (10 MBps) [2024-11-18T03:23:33.238Z] Copying: 112/1024 [MB] (11 MBps) [2024-11-18T03:23:34.173Z] Copying: 123/1024 [MB] (11 MBps) [2024-11-18T03:23:35.548Z] Copying: 135/1024 [MB] (11 MBps) [2024-11-18T03:23:36.483Z] Copying: 146/1024 [MB] (11 MBps) [2024-11-18T03:23:37.419Z] Copying: 158/1024 [MB] (11 MBps) [2024-11-18T03:23:38.396Z] Copying: 169/1024 [MB] (11 MBps) [2024-11-18T03:23:39.361Z] Copying: 180/1024 [MB] (11 MBps) [2024-11-18T03:23:40.297Z] Copying: 192/1024 [MB] (11 MBps) [2024-11-18T03:23:41.234Z] Copying: 202/1024 [MB] (10 MBps) [2024-11-18T03:23:42.177Z] Copying: 213/1024 [MB] (10 MBps) [2024-11-18T03:23:43.552Z] Copying: 223/1024 [MB] (10 MBps) [2024-11-18T03:23:44.487Z] Copying: 235/1024 [MB] (11 MBps) [2024-11-18T03:23:45.420Z] Copying: 246/1024 [MB] (11 MBps) [2024-11-18T03:23:46.356Z] Copying: 258/1024 [MB] (11 MBps) [2024-11-18T03:23:47.290Z] Copying: 269/1024 [MB] (11 MBps) [2024-11-18T03:23:48.225Z] Copying: 281/1024 [MB] (11 MBps) [2024-11-18T03:23:49.600Z] Copying: 292/1024 [MB] (11 MBps) [2024-11-18T03:23:50.166Z] Copying: 304/1024 [MB] (11 MBps) [2024-11-18T03:23:51.542Z] Copying: 315/1024 [MB] (11 MBps) [2024-11-18T03:23:52.476Z] Copying: 326/1024 [MB] (10 MBps) [2024-11-18T03:23:53.412Z] Copying: 337/1024 [MB] (11 MBps) [2024-11-18T03:23:54.349Z] Copying: 349/1024 [MB] 
(11 MBps) [2024-11-18T03:23:55.292Z] Copying: 360/1024 [MB] (11 MBps) [2024-11-18T03:23:56.227Z] Copying: 371/1024 [MB] (10 MBps) [2024-11-18T03:23:57.160Z] Copying: 381/1024 [MB] (10 MBps) [2024-11-18T03:23:58.542Z] Copying: 393/1024 [MB] (11 MBps) [2024-11-18T03:23:59.478Z] Copying: 403/1024 [MB] (10 MBps) [2024-11-18T03:24:00.413Z] Copying: 414/1024 [MB] (11 MBps) [2024-11-18T03:24:01.347Z] Copying: 426/1024 [MB] (11 MBps) [2024-11-18T03:24:02.283Z] Copying: 437/1024 [MB] (10 MBps) [2024-11-18T03:24:03.219Z] Copying: 448/1024 [MB] (11 MBps) [2024-11-18T03:24:04.595Z] Copying: 459/1024 [MB] (11 MBps) [2024-11-18T03:24:05.164Z] Copying: 470/1024 [MB] (11 MBps) [2024-11-18T03:24:06.539Z] Copying: 482/1024 [MB] (11 MBps) [2024-11-18T03:24:07.488Z] Copying: 493/1024 [MB] (11 MBps) [2024-11-18T03:24:08.503Z] Copying: 504/1024 [MB] (11 MBps) [2024-11-18T03:24:09.437Z] Copying: 516/1024 [MB] (11 MBps) [2024-11-18T03:24:10.372Z] Copying: 527/1024 [MB] (11 MBps) [2024-11-18T03:24:11.307Z] Copying: 539/1024 [MB] (11 MBps) [2024-11-18T03:24:12.242Z] Copying: 550/1024 [MB] (10 MBps) [2024-11-18T03:24:13.177Z] Copying: 561/1024 [MB] (11 MBps) [2024-11-18T03:24:14.553Z] Copying: 573/1024 [MB] (11 MBps) [2024-11-18T03:24:15.488Z] Copying: 584/1024 [MB] (11 MBps) [2024-11-18T03:24:16.424Z] Copying: 595/1024 [MB] (11 MBps) [2024-11-18T03:24:17.367Z] Copying: 607/1024 [MB] (11 MBps) [2024-11-18T03:24:18.310Z] Copying: 618/1024 [MB] (11 MBps) [2024-11-18T03:24:19.247Z] Copying: 628/1024 [MB] (10 MBps) [2024-11-18T03:24:20.295Z] Copying: 640/1024 [MB] (11 MBps) [2024-11-18T03:24:21.234Z] Copying: 651/1024 [MB] (11 MBps) [2024-11-18T03:24:22.169Z] Copying: 661/1024 [MB] (10 MBps) [2024-11-18T03:24:23.548Z] Copying: 673/1024 [MB] (11 MBps) [2024-11-18T03:24:24.482Z] Copying: 683/1024 [MB] (10 MBps) [2024-11-18T03:24:25.417Z] Copying: 695/1024 [MB] (11 MBps) [2024-11-18T03:24:26.349Z] Copying: 706/1024 [MB] (11 MBps) [2024-11-18T03:24:27.287Z] Copying: 718/1024 [MB] (11 MBps) [2024-11-18T03:24:28.227Z] Copying: 729/1024 [MB] (11 MBps) [2024-11-18T03:24:29.169Z] Copying: 740/1024 [MB] (10 MBps) [2024-11-18T03:24:30.550Z] Copying: 750/1024 [MB] (10 MBps) [2024-11-18T03:24:31.487Z] Copying: 761/1024 [MB] (10 MBps) [2024-11-18T03:24:32.424Z] Copying: 772/1024 [MB] (11 MBps) [2024-11-18T03:24:33.359Z] Copying: 783/1024 [MB] (11 MBps) [2024-11-18T03:24:34.294Z] Copying: 795/1024 [MB] (11 MBps) [2024-11-18T03:24:35.229Z] Copying: 807/1024 [MB] (11 MBps) [2024-11-18T03:24:36.165Z] Copying: 818/1024 [MB] (11 MBps) [2024-11-18T03:24:37.539Z] Copying: 830/1024 [MB] (11 MBps) [2024-11-18T03:24:38.474Z] Copying: 841/1024 [MB] (11 MBps) [2024-11-18T03:24:39.408Z] Copying: 853/1024 [MB] (11 MBps) [2024-11-18T03:24:40.341Z] Copying: 864/1024 [MB] (11 MBps) [2024-11-18T03:24:41.275Z] Copying: 877/1024 [MB] (13 MBps) [2024-11-18T03:24:42.210Z] Copying: 889/1024 [MB] (11 MBps) [2024-11-18T03:24:43.586Z] Copying: 900/1024 [MB] (11 MBps) [2024-11-18T03:24:44.521Z] Copying: 911/1024 [MB] (11 MBps) [2024-11-18T03:24:45.456Z] Copying: 923/1024 [MB] (11 MBps) [2024-11-18T03:24:46.483Z] Copying: 934/1024 [MB] (11 MBps) [2024-11-18T03:24:47.417Z] Copying: 946/1024 [MB] (11 MBps) [2024-11-18T03:24:48.350Z] Copying: 957/1024 [MB] (11 MBps) [2024-11-18T03:24:49.285Z] Copying: 969/1024 [MB] (11 MBps) [2024-11-18T03:24:50.219Z] Copying: 980/1024 [MB] (11 MBps) [2024-11-18T03:24:51.594Z] Copying: 992/1024 [MB] (11 MBps) [2024-11-18T03:24:52.161Z] Copying: 1003/1024 [MB] (11 MBps) [2024-11-18T03:24:53.540Z] Copying: 1014/1024 [MB] (11 MBps) 
[2024-11-18T03:24:54.110Z] Copying: 1047884/1048576 [kB] (8924 kBps) [2024-11-18T03:24:54.110Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-18 03:24:53.844734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.533 [2024-11-18 03:24:53.845563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:50.533 [2024-11-18 03:24:53.845999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:50.533 [2024-11-18 03:24:53.846055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.533 [2024-11-18 03:24:53.849973] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:50.533 [2024-11-18 03:24:53.852762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.533 [2024-11-18 03:24:53.852925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:50.533 [2024-11-18 03:24:53.852995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.602 ms 00:24:50.533 [2024-11-18 03:24:53.853020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.533 [2024-11-18 03:24:53.862814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.533 [2024-11-18 03:24:53.862970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:50.533 [2024-11-18 03:24:53.863037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.704 ms 00:24:50.533 [2024-11-18 03:24:53.863063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.533 [2024-11-18 03:24:53.888468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.533 [2024-11-18 03:24:53.888633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:50.533 [2024-11-18 03:24:53.888661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.371 ms 00:24:50.533 [2024-11-18 03:24:53.888670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.533 [2024-11-18 03:24:53.893729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.533 [2024-11-18 03:24:53.893773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:50.533 [2024-11-18 03:24:53.893784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.952 ms 00:24:50.533 [2024-11-18 03:24:53.893791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.533 [2024-11-18 03:24:53.897011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.533 [2024-11-18 03:24:53.897057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:50.533 [2024-11-18 03:24:53.897066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.176 ms 00:24:50.533 [2024-11-18 03:24:53.897073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.533 [2024-11-18 03:24:53.903087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.533 [2024-11-18 03:24:53.903149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:50.533 [2024-11-18 03:24:53.903160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.976 ms 00:24:50.533 [2024-11-18 03:24:53.903168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.795 [2024-11-18 03:24:54.161410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.795 
[2024-11-18 03:24:54.161436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:50.795 [2024-11-18 03:24:54.161444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 258.199 ms 00:24:50.795 [2024-11-18 03:24:54.161450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.795 [2024-11-18 03:24:54.163817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.795 [2024-11-18 03:24:54.163842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:50.795 [2024-11-18 03:24:54.163849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.348 ms 00:24:50.795 [2024-11-18 03:24:54.163855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.795 [2024-11-18 03:24:54.165906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.795 [2024-11-18 03:24:54.165928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:50.795 [2024-11-18 03:24:54.165935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.028 ms 00:24:50.795 [2024-11-18 03:24:54.165940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.795 [2024-11-18 03:24:54.167546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.795 [2024-11-18 03:24:54.167570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:50.795 [2024-11-18 03:24:54.167576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.583 ms 00:24:50.795 [2024-11-18 03:24:54.167581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.795 [2024-11-18 03:24:54.169173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.795 [2024-11-18 03:24:54.169196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:50.795 [2024-11-18 03:24:54.169203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.551 ms 00:24:50.795 [2024-11-18 03:24:54.169209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.795 [2024-11-18 03:24:54.169230] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:50.795 [2024-11-18 03:24:54.169240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104192 / 261120 wr_cnt: 1 state: open 00:24:50.795 [2024-11-18 03:24:54.169249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 
00:24:50.795 [2024-11-18 03:24:54.169295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 
wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:50.795 [2024-11-18 03:24:54.169552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169767] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:50.796 [2024-11-18 03:24:54.169869] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:50.796 [2024-11-18 03:24:54.169877] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 87ba54cb-cd24-476c-9269-cc1d7eb66ebe 00:24:50.796 [2024-11-18 03:24:54.169886] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104192 00:24:50.796 [2024-11-18 03:24:54.169892] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105152 00:24:50.796 [2024-11-18 03:24:54.169897] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104192 00:24:50.796 [2024-11-18 03:24:54.169903] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:24:50.796 [2024-11-18 03:24:54.169909] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:50.796 [2024-11-18 03:24:54.169915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:50.796 [2024-11-18 03:24:54.169920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:50.796 [2024-11-18 03:24:54.169925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:50.796 [2024-11-18 
03:24:54.169930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:50.796 [2024-11-18 03:24:54.169941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.796 [2024-11-18 03:24:54.169947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:50.796 [2024-11-18 03:24:54.169953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:24:50.796 [2024-11-18 03:24:54.169959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.796 [2024-11-18 03:24:54.171632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.796 [2024-11-18 03:24:54.171655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:50.796 [2024-11-18 03:24:54.171662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:24:50.796 [2024-11-18 03:24:54.171668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.796 [2024-11-18 03:24:54.171753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.796 [2024-11-18 03:24:54.171760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:50.796 [2024-11-18 03:24:54.171769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:50.796 [2024-11-18 03:24:54.171775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.797 [2024-11-18 03:24:54.176869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.797 [2024-11-18 03:24:54.176958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:50.797 [2024-11-18 03:24:54.176997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.797 [2024-11-18 03:24:54.177015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.797 [2024-11-18 03:24:54.177063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.797 [2024-11-18 03:24:54.177080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:50.797 [2024-11-18 03:24:54.177098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.797 [2024-11-18 03:24:54.177113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.797 [2024-11-18 03:24:54.177168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.797 [2024-11-18 03:24:54.177191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:50.797 [2024-11-18 03:24:54.177207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.797 [2024-11-18 03:24:54.177267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.797 [2024-11-18 03:24:54.177292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.797 [2024-11-18 03:24:54.177309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:50.797 [2024-11-18 03:24:54.177337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.797 [2024-11-18 03:24:54.177355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.797 [2024-11-18 03:24:54.187609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.797 [2024-11-18 03:24:54.187725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:50.797 [2024-11-18 03:24:54.187766] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.797 [2024-11-18 03:24:54.187784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.797 [2024-11-18 03:24:54.196111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.797 [2024-11-18 03:24:54.196228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:50.797 [2024-11-18 03:24:54.196439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.797 [2024-11-18 03:24:54.196458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.797 [2024-11-18 03:24:54.196515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.797 [2024-11-18 03:24:54.196533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.797 [2024-11-18 03:24:54.196548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.797 [2024-11-18 03:24:54.196564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.797 [2024-11-18 03:24:54.196594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.797 [2024-11-18 03:24:54.196618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.797 [2024-11-18 03:24:54.196636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.797 [2024-11-18 03:24:54.196736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.797 [2024-11-18 03:24:54.196826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.797 [2024-11-18 03:24:54.196881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.797 [2024-11-18 03:24:54.196899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.797 [2024-11-18 03:24:54.196936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.797 [2024-11-18 03:24:54.196976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.797 [2024-11-18 03:24:54.196995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:50.797 [2024-11-18 03:24:54.197011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.797 [2024-11-18 03:24:54.197026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.797 [2024-11-18 03:24:54.197103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.797 [2024-11-18 03:24:54.197124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.797 [2024-11-18 03:24:54.197141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.797 [2024-11-18 03:24:54.197156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.797 [2024-11-18 03:24:54.197206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.797 [2024-11-18 03:24:54.197225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.797 [2024-11-18 03:24:54.197264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.797 [2024-11-18 03:24:54.197281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.797 [2024-11-18 03:24:54.197417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 353.755 ms, result 0 00:24:51.370 00:24:51.370 00:24:51.370 03:24:54 
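Per the finish_msg record above, this first 'FTL shutdown' pass took 353.755 ms, dominated by Persist P2L metadata (258.199 ms) and Persist L2P (25.371 ms); the stats dump likewise gives WAF = total writes / user writes = 105152 / 104192 ≈ 1.0092. A minimal sketch to tabulate step durations from a saved copy of this output (console.log is a hypothetical filename; stray timestamp digits may trail the names, which is fine for eyeballing):

  # Pair each trace_step "name:" record with its "duration:" record, slowest last.
  grep -oE 'name: [A-Za-z0-9 ]+|duration: [0-9.]+ ms' console.log \
    | paste -d ' ' - - \
    | sort -t: -k3 -g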
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:53.921 03:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:53.921 [2024-11-18 03:24:57.230182] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:53.921 [2024-11-18 03:24:57.230328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90981 ] 00:24:53.921 [2024-11-18 03:24:57.376893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.921 [2024-11-18 03:24:57.423095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.183 [2024-11-18 03:24:57.521462] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:54.183 [2024-11-18 03:24:57.521516] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:54.183 [2024-11-18 03:24:57.673951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.183 [2024-11-18 03:24:57.673997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:54.183 [2024-11-18 03:24:57.674014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:54.183 [2024-11-18 03:24:57.674026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.183 [2024-11-18 03:24:57.674073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.183 [2024-11-18 03:24:57.674084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:54.183 [2024-11-18 03:24:57.674098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:54.183 [2024-11-18 03:24:57.674114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.183 [2024-11-18 03:24:57.674133] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:54.183 [2024-11-18 03:24:57.674395] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:54.183 [2024-11-18 03:24:57.674412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.183 [2024-11-18 03:24:57.674420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:54.183 [2024-11-18 03:24:57.674428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:24:54.183 [2024-11-18 03:24:57.674438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.183 [2024-11-18 03:24:57.675824] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:54.183 [2024-11-18 03:24:57.678962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.183 [2024-11-18 03:24:57.678997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:54.183 [2024-11-18 03:24:57.679006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.135 ms 00:24:54.183 [2024-11-18 03:24:57.679014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.183 [2024-11-18 03:24:57.679072] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.183 [2024-11-18 03:24:57.679081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:54.183 [2024-11-18 03:24:57.679089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:54.183 [2024-11-18 03:24:57.679101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.183 [2024-11-18 03:24:57.685811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.183 [2024-11-18 03:24:57.685992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:54.183 [2024-11-18 03:24:57.686013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.672 ms 00:24:54.183 [2024-11-18 03:24:57.686024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.183 [2024-11-18 03:24:57.686119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.183 [2024-11-18 03:24:57.686128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:54.183 [2024-11-18 03:24:57.686137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:54.183 [2024-11-18 03:24:57.686145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.183 [2024-11-18 03:24:57.686197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.183 [2024-11-18 03:24:57.686208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:54.183 [2024-11-18 03:24:57.686217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:54.183 [2024-11-18 03:24:57.686225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.183 [2024-11-18 03:24:57.686252] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:54.183 [2024-11-18 03:24:57.688019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.183 [2024-11-18 03:24:57.688053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:54.183 [2024-11-18 03:24:57.688062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 00:24:54.183 [2024-11-18 03:24:57.688072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.183 [2024-11-18 03:24:57.688102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.183 [2024-11-18 03:24:57.688114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:54.183 [2024-11-18 03:24:57.688126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:54.183 [2024-11-18 03:24:57.688133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.183 [2024-11-18 03:24:57.688160] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:54.183 [2024-11-18 03:24:57.688183] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:54.184 [2024-11-18 03:24:57.688220] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:54.184 [2024-11-18 03:24:57.688236] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:54.184 [2024-11-18 03:24:57.688357] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:54.184 
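The superblock and layout records that follow can be cross-checked against each other: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB reported for the l2p region (a sanity check on the dump, not a computation the log performs):

  # 20971520 entries x 4-byte addresses, expressed in MiB
  echo "$(( 20971520 * 4 / 1024 / 1024 )) MiB"   # prints: 80 MiB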
[2024-11-18 03:24:57.688374] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:54.184 [2024-11-18 03:24:57.688389] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:54.184 [2024-11-18 03:24:57.688399] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:54.184 [2024-11-18 03:24:57.688411] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:54.184 [2024-11-18 03:24:57.688419] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:54.184 [2024-11-18 03:24:57.688430] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:54.184 [2024-11-18 03:24:57.688438] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:54.184 [2024-11-18 03:24:57.688445] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:54.184 [2024-11-18 03:24:57.688453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.184 [2024-11-18 03:24:57.688461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:54.184 [2024-11-18 03:24:57.688469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:24:54.184 [2024-11-18 03:24:57.688475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.184 [2024-11-18 03:24:57.688558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.184 [2024-11-18 03:24:57.688569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:54.184 [2024-11-18 03:24:57.688576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:54.184 [2024-11-18 03:24:57.688583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.184 [2024-11-18 03:24:57.688679] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:54.184 [2024-11-18 03:24:57.688690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:54.184 [2024-11-18 03:24:57.688700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:54.184 [2024-11-18 03:24:57.688714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.184 [2024-11-18 03:24:57.688726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:54.184 [2024-11-18 03:24:57.688741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:54.184 [2024-11-18 03:24:57.688750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:54.184 [2024-11-18 03:24:57.688758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:54.184 [2024-11-18 03:24:57.688766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:54.184 [2024-11-18 03:24:57.688773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:54.184 [2024-11-18 03:24:57.688780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:54.184 [2024-11-18 03:24:57.688790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:54.184 [2024-11-18 03:24:57.688798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:54.184 [2024-11-18 03:24:57.688805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:54.184 [2024-11-18 
03:24:57.688813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:54.184 [2024-11-18 03:24:57.688821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.184 [2024-11-18 03:24:57.688829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:54.184 [2024-11-18 03:24:57.688837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:54.184 [2024-11-18 03:24:57.688844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.184 [2024-11-18 03:24:57.688852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:54.184 [2024-11-18 03:24:57.688860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:54.184 [2024-11-18 03:24:57.688869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.184 [2024-11-18 03:24:57.688877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:54.184 [2024-11-18 03:24:57.688885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:54.184 [2024-11-18 03:24:57.688892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.184 [2024-11-18 03:24:57.688900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:54.184 [2024-11-18 03:24:57.688907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:54.184 [2024-11-18 03:24:57.688915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.184 [2024-11-18 03:24:57.688922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:54.184 [2024-11-18 03:24:57.688929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:54.184 [2024-11-18 03:24:57.688936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.184 [2024-11-18 03:24:57.688944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:54.184 [2024-11-18 03:24:57.688952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:54.184 [2024-11-18 03:24:57.688960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:54.184 [2024-11-18 03:24:57.688968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:54.184 [2024-11-18 03:24:57.688978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:54.184 [2024-11-18 03:24:57.688986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:54.184 [2024-11-18 03:24:57.688995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:54.184 [2024-11-18 03:24:57.689003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:54.184 [2024-11-18 03:24:57.689010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.184 [2024-11-18 03:24:57.689017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:54.184 [2024-11-18 03:24:57.689025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:54.184 [2024-11-18 03:24:57.689032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.184 [2024-11-18 03:24:57.689042] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:54.184 [2024-11-18 03:24:57.689050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:54.184 [2024-11-18 03:24:57.689059] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.00 MiB 00:24:54.184 [2024-11-18 03:24:57.689071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.184 [2024-11-18 03:24:57.689079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:54.184 [2024-11-18 03:24:57.689087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:54.184 [2024-11-18 03:24:57.689093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:54.184 [2024-11-18 03:24:57.689100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:54.184 [2024-11-18 03:24:57.689106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:54.184 [2024-11-18 03:24:57.689113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:54.184 [2024-11-18 03:24:57.689122] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:54.184 [2024-11-18 03:24:57.689132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.184 [2024-11-18 03:24:57.689144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:54.184 [2024-11-18 03:24:57.689151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:54.184 [2024-11-18 03:24:57.689158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:54.184 [2024-11-18 03:24:57.689165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:54.184 [2024-11-18 03:24:57.689173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:54.184 [2024-11-18 03:24:57.689179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:54.184 [2024-11-18 03:24:57.689186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:54.184 [2024-11-18 03:24:57.689193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:54.184 [2024-11-18 03:24:57.689200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:54.184 [2024-11-18 03:24:57.689208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:54.184 [2024-11-18 03:24:57.689214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:54.184 [2024-11-18 03:24:57.689222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:54.184 [2024-11-18 03:24:57.689228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:54.184 [2024-11-18 03:24:57.689235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x7220 blk_sz:0x13c0e0 00:24:54.185 [2024-11-18 03:24:57.689244] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:54.185 [2024-11-18 03:24:57.689252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.185 [2024-11-18 03:24:57.689261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:54.185 [2024-11-18 03:24:57.689268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:54.185 [2024-11-18 03:24:57.689275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:54.185 [2024-11-18 03:24:57.689281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:54.185 [2024-11-18 03:24:57.689289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.185 [2024-11-18 03:24:57.689297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:54.185 [2024-11-18 03:24:57.689304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:24:54.185 [2024-11-18 03:24:57.689334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.185 [2024-11-18 03:24:57.714006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.185 [2024-11-18 03:24:57.714064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:54.185 [2024-11-18 03:24:57.714084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.625 ms 00:24:54.185 [2024-11-18 03:24:57.714096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.185 [2024-11-18 03:24:57.714232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.185 [2024-11-18 03:24:57.714247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:54.185 [2024-11-18 03:24:57.714261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:54.185 [2024-11-18 03:24:57.714273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.185 [2024-11-18 03:24:57.725210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.185 [2024-11-18 03:24:57.725397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:54.185 [2024-11-18 03:24:57.725419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.833 ms 00:24:54.185 [2024-11-18 03:24:57.725428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.185 [2024-11-18 03:24:57.725460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.185 [2024-11-18 03:24:57.725469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:54.185 [2024-11-18 03:24:57.725478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:54.185 [2024-11-18 03:24:57.725485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.185 [2024-11-18 03:24:57.725938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.185 [2024-11-18 03:24:57.725962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
00:24:54.185 [2024-11-18 03:24:57.725974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:24:54.185 [2024-11-18 03:24:57.725989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.185 [2024-11-18 03:24:57.726126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.185 [2024-11-18 03:24:57.726137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:54.185 [2024-11-18 03:24:57.726150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:24:54.185 [2024-11-18 03:24:57.726160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.185 [2024-11-18 03:24:57.732338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.185 [2024-11-18 03:24:57.732368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:54.185 [2024-11-18 03:24:57.732381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:24:54.185 [2024-11-18 03:24:57.732389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.185 [2024-11-18 03:24:57.735689] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:54.185 [2024-11-18 03:24:57.735725] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:54.185 [2024-11-18 03:24:57.735742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.185 [2024-11-18 03:24:57.735750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:54.185 [2024-11-18 03:24:57.735759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.269 ms 00:24:54.185 [2024-11-18 03:24:57.735766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.185 [2024-11-18 03:24:57.750781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.185 [2024-11-18 03:24:57.750820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:54.185 [2024-11-18 03:24:57.750836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.974 ms 00:24:54.185 [2024-11-18 03:24:57.750845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.185 [2024-11-18 03:24:57.753236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.185 [2024-11-18 03:24:57.753272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:54.185 [2024-11-18 03:24:57.753281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.350 ms 00:24:54.185 [2024-11-18 03:24:57.753289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.185 [2024-11-18 03:24:57.755242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.185 [2024-11-18 03:24:57.755273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:54.185 [2024-11-18 03:24:57.755282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.906 ms 00:24:54.185 [2024-11-18 03:24:57.755289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.185 [2024-11-18 03:24:57.755635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.185 [2024-11-18 03:24:57.755659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:54.185 [2024-11-18 03:24:57.755667] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:24:54.185 [2024-11-18 03:24:57.755676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.447 [2024-11-18 03:24:57.777029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.447 [2024-11-18 03:24:57.777076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:54.447 [2024-11-18 03:24:57.777087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.336 ms 00:24:54.447 [2024-11-18 03:24:57.777098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.447 [2024-11-18 03:24:57.784945] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:54.448 [2024-11-18 03:24:57.787784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.448 [2024-11-18 03:24:57.787817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:54.448 [2024-11-18 03:24:57.787834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.644 ms 00:24:54.448 [2024-11-18 03:24:57.787842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.448 [2024-11-18 03:24:57.787911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.448 [2024-11-18 03:24:57.787922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:54.448 [2024-11-18 03:24:57.787932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:54.448 [2024-11-18 03:24:57.787941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.448 [2024-11-18 03:24:57.789755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.448 [2024-11-18 03:24:57.789788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:54.448 [2024-11-18 03:24:57.789798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.777 ms 00:24:54.448 [2024-11-18 03:24:57.789812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.448 [2024-11-18 03:24:57.789842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.448 [2024-11-18 03:24:57.789851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:54.448 [2024-11-18 03:24:57.789859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:54.448 [2024-11-18 03:24:57.789867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.448 [2024-11-18 03:24:57.789905] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:54.448 [2024-11-18 03:24:57.789916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.448 [2024-11-18 03:24:57.789925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:54.448 [2024-11-18 03:24:57.789934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:54.448 [2024-11-18 03:24:57.789942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.448 [2024-11-18 03:24:57.794759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.448 [2024-11-18 03:24:57.794794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:54.448 [2024-11-18 03:24:57.794805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.795 ms 00:24:54.448 [2024-11-18 03:24:57.794818] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:24:54.448 [2024-11-18 03:24:57.794890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:54.448 [2024-11-18 03:24:57.794900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:24:54.448 [2024-11-18 03:24:57.794909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:24:54.448 [2024-11-18 03:24:57.794917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:54.448 [2024-11-18 03:24:57.796015] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 121.593 ms, result 0
00:24:55.833  [2024-11-18T03:24:59.983Z] Copying: 1152/1048576 [kB] (1152 kBps)
[... copy progress condensed: 4636/1048576 kB at 3484 kBps, then 15/1024 MB through 1023/1024 MB at 10-39 MBps ...]
[2024-11-18T03:25:50.182Z] Copying: 1024/1024 [MB] (average 20 MBps)
[2024-11-18 03:25:50.088129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:46.605 [2024-11-18 03:25:50.088501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:46.605 [2024-11-18 03:25:50.088681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:25:46.605 [2024-11-18 03:25:50.088732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:46.605 [2024-11-18 03:25:50.088806] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:46.605 [2024-11-18 03:25:50.089993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:46.605 [2024-11-18 03:25:50.090174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:46.605 [2024-11-18 03:25:50.090293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms
00:25:46.605 [2024-11-18 03:25:50.090352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:46.605 [2024-11-18 03:25:50.090771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:46.605 [2024-11-18 03:25:50.090888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:25:46.605 [2024-11-18 03:25:50.090932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms
00:25:46.605 [2024-11-18 03:25:50.091031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:46.605 [2024-11-18 03:25:50.107164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:46.605 [2024-11-18 03:25:50.107388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:25:46.605 [2024-11-18 03:25:50.107468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.075 ms
00:25:46.605 [2024-11-18 03:25:50.107508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:46.605 [2024-11-18 03:25:50.114622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:46.605 [2024-11-18 03:25:50.114815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:25:46.605 [2024-11-18 03:25:50.115005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.056 ms
00:25:46.605 [2024-11-18 03:25:50.115047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:46.605 [2024-11-18 03:25:50.117829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:46.605 [2024-11-18 03:25:50.117994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:25:46.605 [2024-11-18 03:25:50.118051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.639 ms
00:25:46.605 [2024-11-18 03:25:50.118073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:46.605 [2024-11-18 03:25:50.122625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:46.605 [2024-11-18 03:25:50.122799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:46.605 [2024-11-18 03:25:50.122856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.428 ms
00:25:46.605 [2024-11-18 03:25:50.122887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
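The spdk_dd pass above (--ib=ftl0 ... --count=262144) reported 1024/1024 MB copied, which is consistent with a 4 KiB FTL block size (an inference from those two numbers; the block size itself is not logged here):

  # 262144 blocks x 4096 bytes, expressed in MiB
  echo "$(( 262144 * 4096 / 1024 / 1024 )) MiB"   # prints: 1024 MiB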
00:25:46.605 [2024-11-18 03:25:50.125050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.605 [2024-11-18 03:25:50.125218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:46.605 [2024-11-18 03:25:50.125282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.109 ms 00:25:46.605 [2024-11-18 03:25:50.125481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.605 [2024-11-18 03:25:50.128016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.605 [2024-11-18 03:25:50.128173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:46.605 [2024-11-18 03:25:50.128232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.436 ms 00:25:46.605 [2024-11-18 03:25:50.128613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.605 [2024-11-18 03:25:50.130786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.605 [2024-11-18 03:25:50.130967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:46.605 [2024-11-18 03:25:50.131020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:25:46.605 [2024-11-18 03:25:50.131041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.605 [2024-11-18 03:25:50.132688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.605 [2024-11-18 03:25:50.132837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:46.605 [2024-11-18 03:25:50.132891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:25:46.605 [2024-11-18 03:25:50.132913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.605 [2024-11-18 03:25:50.134484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.605 [2024-11-18 03:25:50.134663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:46.605 [2024-11-18 03:25:50.134718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:25:46.605 [2024-11-18 03:25:50.134728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.605 [2024-11-18 03:25:50.134760] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:46.605 [2024-11-18 03:25:50.134776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:46.605 [2024-11-18 03:25:50.134796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:46.605 [2024-11-18 03:25:50.134805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 
03:25:50.134854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:46.605 [2024-11-18 03:25:50.134926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.134933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.134941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.134948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.134955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.134962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.134969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.134978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.134985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.134993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 
00:25:46.606 [2024-11-18 03:25:50.135048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 
wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:46.606 [2024-11-18 03:25:50.135606] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:46.606 [2024-11-18 03:25:50.135615] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 87ba54cb-cd24-476c-9269-cc1d7eb66ebe 00:25:46.606 [2024-11-18 03:25:50.135623] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:46.606 [2024-11-18 03:25:50.135640] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 160448 00:25:46.606 [2024-11-18 03:25:50.135648] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 158464 00:25:46.606 [2024-11-18 03:25:50.135667] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:25:46.606 [2024-11-18 03:25:50.135675] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:46.606 [2024-11-18 03:25:50.135693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:46.606 [2024-11-18 03:25:50.135702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:46.607 
[2024-11-18 03:25:50.135709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:46.607 [2024-11-18 03:25:50.135716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:46.607 [2024-11-18 03:25:50.135724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.607 [2024-11-18 03:25:50.135732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:46.607 [2024-11-18 03:25:50.135743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:25:46.607 [2024-11-18 03:25:50.135750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.138143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.607 [2024-11-18 03:25:50.138196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:46.607 [2024-11-18 03:25:50.138213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.373 ms 00:25:46.607 [2024-11-18 03:25:50.138223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.138391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.607 [2024-11-18 03:25:50.138403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:46.607 [2024-11-18 03:25:50.138413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:25:46.607 [2024-11-18 03:25:50.138425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.145348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.607 [2024-11-18 03:25:50.145399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:46.607 [2024-11-18 03:25:50.145410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.607 [2024-11-18 03:25:50.145418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.145476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.607 [2024-11-18 03:25:50.145485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:46.607 [2024-11-18 03:25:50.145493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.607 [2024-11-18 03:25:50.145505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.145552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.607 [2024-11-18 03:25:50.145562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:46.607 [2024-11-18 03:25:50.145571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.607 [2024-11-18 03:25:50.145578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.145593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.607 [2024-11-18 03:25:50.145602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:46.607 [2024-11-18 03:25:50.145610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.607 [2024-11-18 03:25:50.145617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.158834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.607 [2024-11-18 03:25:50.158891] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:46.607 [2024-11-18 03:25:50.158903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.607 [2024-11-18 03:25:50.158913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.168967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.607 [2024-11-18 03:25:50.169020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:46.607 [2024-11-18 03:25:50.169032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.607 [2024-11-18 03:25:50.169040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.169097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.607 [2024-11-18 03:25:50.169106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:46.607 [2024-11-18 03:25:50.169115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.607 [2024-11-18 03:25:50.169123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.169160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.607 [2024-11-18 03:25:50.169169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:46.607 [2024-11-18 03:25:50.169178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.607 [2024-11-18 03:25:50.169185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.169253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.607 [2024-11-18 03:25:50.169266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:46.607 [2024-11-18 03:25:50.169275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.607 [2024-11-18 03:25:50.169283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.169365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.607 [2024-11-18 03:25:50.169376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:46.607 [2024-11-18 03:25:50.169386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.607 [2024-11-18 03:25:50.169393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.169433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.607 [2024-11-18 03:25:50.169446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:46.607 [2024-11-18 03:25:50.169455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.607 [2024-11-18 03:25:50.169464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.169513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.607 [2024-11-18 03:25:50.169535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:46.607 [2024-11-18 03:25:50.169544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.607 [2024-11-18 03:25:50.169552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.607 [2024-11-18 03:25:50.169683] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 81.534 ms, result 0 00:25:46.869 00:25:46.869 00:25:46.869 03:25:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:49.420 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:49.420 03:25:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:49.420 [2024-11-18 03:25:52.671542] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:49.420 [2024-11-18 03:25:52.671680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91545 ] 00:25:49.420 [2024-11-18 03:25:52.823462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.420 [2024-11-18 03:25:52.872729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.420 [2024-11-18 03:25:52.986872] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:49.420 [2024-11-18 03:25:52.986953] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:49.684 [2024-11-18 03:25:53.147563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.684 [2024-11-18 03:25:53.147624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:49.684 [2024-11-18 03:25:53.147642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:49.684 [2024-11-18 03:25:53.147651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.684 [2024-11-18 03:25:53.147710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.684 [2024-11-18 03:25:53.147721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:49.684 [2024-11-18 03:25:53.147730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:49.684 [2024-11-18 03:25:53.147737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.684 [2024-11-18 03:25:53.147766] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:49.684 [2024-11-18 03:25:53.148159] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:49.684 [2024-11-18 03:25:53.148207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.684 [2024-11-18 03:25:53.148217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:49.684 [2024-11-18 03:25:53.148227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:25:49.684 [2024-11-18 03:25:53.148239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.684 [2024-11-18 03:25:53.150036] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:49.684 [2024-11-18 03:25:53.153838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.684 [2024-11-18 03:25:53.153893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:49.684 [2024-11-18 03:25:53.153905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.804 ms 00:25:49.684 [2024-11-18 03:25:53.153914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.684 [2024-11-18 03:25:53.153996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.684 [2024-11-18 03:25:53.154009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:49.684 [2024-11-18 03:25:53.154018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:49.684 [2024-11-18 03:25:53.154026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.684 [2024-11-18 03:25:53.162178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.684 [2024-11-18 03:25:53.162222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:49.684 [2024-11-18 03:25:53.162233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.103 ms 00:25:49.684 [2024-11-18 03:25:53.162250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.684 [2024-11-18 03:25:53.162383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.684 [2024-11-18 03:25:53.162398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:49.684 [2024-11-18 03:25:53.162408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:25:49.684 [2024-11-18 03:25:53.162420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.684 [2024-11-18 03:25:53.162482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.684 [2024-11-18 03:25:53.162492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:49.684 [2024-11-18 03:25:53.162501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:49.684 [2024-11-18 03:25:53.162509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.684 [2024-11-18 03:25:53.162563] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:49.684 [2024-11-18 03:25:53.164756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.684 [2024-11-18 03:25:53.164791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:49.684 [2024-11-18 03:25:53.164802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.202 ms 00:25:49.684 [2024-11-18 03:25:53.164818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.684 [2024-11-18 03:25:53.164853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.684 [2024-11-18 03:25:53.164862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:49.684 [2024-11-18 03:25:53.164871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:49.684 [2024-11-18 03:25:53.164879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.684 [2024-11-18 03:25:53.164905] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:49.684 [2024-11-18 03:25:53.164928] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:49.684 [2024-11-18 03:25:53.164979] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:49.684 [2024-11-18 03:25:53.165006] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout 
blob load 0x190 bytes 00:25:49.684 [2024-11-18 03:25:53.165143] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:49.684 [2024-11-18 03:25:53.165160] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:49.684 [2024-11-18 03:25:53.165171] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:49.684 [2024-11-18 03:25:53.165183] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:49.684 [2024-11-18 03:25:53.165196] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:49.684 [2024-11-18 03:25:53.165208] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:49.684 [2024-11-18 03:25:53.165220] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:49.684 [2024-11-18 03:25:53.165227] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:49.684 [2024-11-18 03:25:53.165235] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:49.684 [2024-11-18 03:25:53.165247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.684 [2024-11-18 03:25:53.165254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:49.684 [2024-11-18 03:25:53.165267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:25:49.684 [2024-11-18 03:25:53.165276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.684 [2024-11-18 03:25:53.165378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.684 [2024-11-18 03:25:53.165390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:49.684 [2024-11-18 03:25:53.165398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:25:49.684 [2024-11-18 03:25:53.165406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.684 [2024-11-18 03:25:53.165506] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:49.684 [2024-11-18 03:25:53.165516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:49.684 [2024-11-18 03:25:53.165526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:49.684 [2024-11-18 03:25:53.165541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.684 [2024-11-18 03:25:53.165550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:49.684 [2024-11-18 03:25:53.165560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:49.684 [2024-11-18 03:25:53.165569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:49.684 [2024-11-18 03:25:53.165578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:49.684 [2024-11-18 03:25:53.165587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:49.684 [2024-11-18 03:25:53.165595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:49.684 [2024-11-18 03:25:53.165608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:49.684 [2024-11-18 03:25:53.165616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:49.684 [2024-11-18 03:25:53.165624] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:49.684 [2024-11-18 03:25:53.165632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:49.684 [2024-11-18 03:25:53.165641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:49.684 [2024-11-18 03:25:53.165649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.684 [2024-11-18 03:25:53.165657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:49.684 [2024-11-18 03:25:53.165664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:49.684 [2024-11-18 03:25:53.165672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.684 [2024-11-18 03:25:53.165681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:49.685 [2024-11-18 03:25:53.165688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:49.685 [2024-11-18 03:25:53.165696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.685 [2024-11-18 03:25:53.165705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:49.685 [2024-11-18 03:25:53.165713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:49.685 [2024-11-18 03:25:53.165721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.685 [2024-11-18 03:25:53.165729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:49.685 [2024-11-18 03:25:53.165743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:49.685 [2024-11-18 03:25:53.165750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.685 [2024-11-18 03:25:53.165758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:49.685 [2024-11-18 03:25:53.165766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:49.685 [2024-11-18 03:25:53.165773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.685 [2024-11-18 03:25:53.165781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:49.685 [2024-11-18 03:25:53.165789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:49.685 [2024-11-18 03:25:53.165796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:49.685 [2024-11-18 03:25:53.165804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:49.685 [2024-11-18 03:25:53.165811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:49.685 [2024-11-18 03:25:53.165818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:49.685 [2024-11-18 03:25:53.165828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:49.685 [2024-11-18 03:25:53.165836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:49.685 [2024-11-18 03:25:53.165844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.685 [2024-11-18 03:25:53.165852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:49.685 [2024-11-18 03:25:53.165860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:49.685 [2024-11-18 03:25:53.165870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.685 [2024-11-18 03:25:53.165878] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:49.685 
[2024-11-18 03:25:53.165887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:49.685 [2024-11-18 03:25:53.165896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:49.685 [2024-11-18 03:25:53.165907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.685 [2024-11-18 03:25:53.165915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:49.685 [2024-11-18 03:25:53.165922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:49.685 [2024-11-18 03:25:53.165929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:49.685 [2024-11-18 03:25:53.165936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:49.685 [2024-11-18 03:25:53.165942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:49.685 [2024-11-18 03:25:53.165949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:49.685 [2024-11-18 03:25:53.165957] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:49.685 [2024-11-18 03:25:53.165967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:49.685 [2024-11-18 03:25:53.165976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:49.685 [2024-11-18 03:25:53.165984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:49.685 [2024-11-18 03:25:53.165991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:49.685 [2024-11-18 03:25:53.166000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:49.685 [2024-11-18 03:25:53.166007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:49.685 [2024-11-18 03:25:53.166014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:49.685 [2024-11-18 03:25:53.166021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:49.685 [2024-11-18 03:25:53.166028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:49.685 [2024-11-18 03:25:53.166035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:49.685 [2024-11-18 03:25:53.166042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:49.685 [2024-11-18 03:25:53.166050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:49.685 [2024-11-18 03:25:53.166056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:49.685 [2024-11-18 03:25:53.166063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 
blk_offs:0x7200 blk_sz:0x20 00:25:49.685 [2024-11-18 03:25:53.166070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:49.685 [2024-11-18 03:25:53.166080] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:49.685 [2024-11-18 03:25:53.166090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:49.685 [2024-11-18 03:25:53.166098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:49.685 [2024-11-18 03:25:53.166105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:49.685 [2024-11-18 03:25:53.166113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:49.685 [2024-11-18 03:25:53.166122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:49.685 [2024-11-18 03:25:53.166130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.685 [2024-11-18 03:25:53.166138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:49.685 [2024-11-18 03:25:53.166145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:25:49.685 [2024-11-18 03:25:53.166152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.685 [2024-11-18 03:25:53.189456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.685 [2024-11-18 03:25:53.189531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:49.685 [2024-11-18 03:25:53.189553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.250 ms 00:25:49.685 [2024-11-18 03:25:53.189573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.685 [2024-11-18 03:25:53.189709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.685 [2024-11-18 03:25:53.189725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:49.685 [2024-11-18 03:25:53.189740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:25:49.685 [2024-11-18 03:25:53.189757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.685 [2024-11-18 03:25:53.201904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.685 [2024-11-18 03:25:53.201950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:49.685 [2024-11-18 03:25:53.201961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.059 ms 00:25:49.685 [2024-11-18 03:25:53.201969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.685 [2024-11-18 03:25:53.202010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.686 [2024-11-18 03:25:53.202023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:49.686 [2024-11-18 03:25:53.202032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:49.686 [2024-11-18 03:25:53.202043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.686 [2024-11-18 03:25:53.202637] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.686 [2024-11-18 03:25:53.202680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:49.686 [2024-11-18 03:25:53.202696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:25:49.686 [2024-11-18 03:25:53.202705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.686 [2024-11-18 03:25:53.202853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.686 [2024-11-18 03:25:53.202864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:49.686 [2024-11-18 03:25:53.202874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:25:49.686 [2024-11-18 03:25:53.202882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.686 [2024-11-18 03:25:53.209370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.686 [2024-11-18 03:25:53.209553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:49.686 [2024-11-18 03:25:53.209569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.464 ms 00:25:49.686 [2024-11-18 03:25:53.209577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.686 [2024-11-18 03:25:53.213214] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:49.686 [2024-11-18 03:25:53.213265] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:49.686 [2024-11-18 03:25:53.213277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.686 [2024-11-18 03:25:53.213284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:49.686 [2024-11-18 03:25:53.213293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.606 ms 00:25:49.686 [2024-11-18 03:25:53.213300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.686 [2024-11-18 03:25:53.229123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.686 [2024-11-18 03:25:53.229169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:49.686 [2024-11-18 03:25:53.229185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.749 ms 00:25:49.686 [2024-11-18 03:25:53.229193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.686 [2024-11-18 03:25:53.231713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.686 [2024-11-18 03:25:53.231759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:49.686 [2024-11-18 03:25:53.231769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.470 ms 00:25:49.686 [2024-11-18 03:25:53.231777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.686 [2024-11-18 03:25:53.233876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.686 [2024-11-18 03:25:53.233918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:49.686 [2024-11-18 03:25:53.233928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:25:49.686 [2024-11-18 03:25:53.233936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.686 [2024-11-18 03:25:53.234294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:49.686 [2024-11-18 03:25:53.234335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:49.686 [2024-11-18 03:25:53.234346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:25:49.686 [2024-11-18 03:25:53.234354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.686 [2024-11-18 03:25:53.256427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.686 [2024-11-18 03:25:53.256494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:49.686 [2024-11-18 03:25:53.256507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.053 ms 00:25:49.686 [2024-11-18 03:25:53.256516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.948 [2024-11-18 03:25:53.264559] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:49.948 [2024-11-18 03:25:53.267412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.948 [2024-11-18 03:25:53.267449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:49.948 [2024-11-18 03:25:53.267467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.847 ms 00:25:49.948 [2024-11-18 03:25:53.267478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.948 [2024-11-18 03:25:53.267553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.948 [2024-11-18 03:25:53.267564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:49.948 [2024-11-18 03:25:53.267574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:49.948 [2024-11-18 03:25:53.267581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.948 [2024-11-18 03:25:53.268357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.948 [2024-11-18 03:25:53.268401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:49.948 [2024-11-18 03:25:53.268411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:25:49.948 [2024-11-18 03:25:53.268422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.948 [2024-11-18 03:25:53.268448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.948 [2024-11-18 03:25:53.268458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:49.948 [2024-11-18 03:25:53.268466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:49.948 [2024-11-18 03:25:53.268474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.948 [2024-11-18 03:25:53.268517] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:49.948 [2024-11-18 03:25:53.268527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.948 [2024-11-18 03:25:53.268535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:49.948 [2024-11-18 03:25:53.268545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:49.948 [2024-11-18 03:25:53.268556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.948 [2024-11-18 03:25:53.273731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.948 [2024-11-18 03:25:53.273778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 
00:25:49.948 [2024-11-18 03:25:53.273789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.154 ms 00:25:49.948 [2024-11-18 03:25:53.273797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.948 [2024-11-18 03:25:53.273879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.948 [2024-11-18 03:25:53.273888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:49.948 [2024-11-18 03:25:53.273897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:49.948 [2024-11-18 03:25:53.273910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.948 [2024-11-18 03:25:53.275013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.990 ms, result 0 00:25:50.893  [2024-11-18T03:25:55.860Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-18T03:25:56.806Z] Copying: 22/1024 [MB] (10 MBps) [2024-11-18T03:25:57.749Z] Copying: 41/1024 [MB] (19 MBps) [2024-11-18T03:25:58.693Z] Copying: 54/1024 [MB] (12 MBps) [2024-11-18T03:25:59.638Z] Copying: 68/1024 [MB] (14 MBps) [2024-11-18T03:26:00.581Z] Copying: 83/1024 [MB] (14 MBps) [2024-11-18T03:26:01.526Z] Copying: 101/1024 [MB] (17 MBps) [2024-11-18T03:26:02.469Z] Copying: 119/1024 [MB] (17 MBps) [2024-11-18T03:26:03.857Z] Copying: 131/1024 [MB] (12 MBps) [2024-11-18T03:26:04.800Z] Copying: 149/1024 [MB] (18 MBps) [2024-11-18T03:26:05.776Z] Copying: 177/1024 [MB] (27 MBps) [2024-11-18T03:26:06.720Z] Copying: 194/1024 [MB] (16 MBps) [2024-11-18T03:26:07.663Z] Copying: 216/1024 [MB] (22 MBps) [2024-11-18T03:26:08.608Z] Copying: 235/1024 [MB] (18 MBps) [2024-11-18T03:26:09.554Z] Copying: 255/1024 [MB] (20 MBps) [2024-11-18T03:26:10.496Z] Copying: 277/1024 [MB] (22 MBps) [2024-11-18T03:26:11.884Z] Copying: 290/1024 [MB] (12 MBps) [2024-11-18T03:26:12.456Z] Copying: 301/1024 [MB] (11 MBps) [2024-11-18T03:26:13.843Z] Copying: 320/1024 [MB] (18 MBps) [2024-11-18T03:26:14.787Z] Copying: 334/1024 [MB] (14 MBps) [2024-11-18T03:26:15.730Z] Copying: 350/1024 [MB] (15 MBps) [2024-11-18T03:26:16.676Z] Copying: 367/1024 [MB] (17 MBps) [2024-11-18T03:26:17.621Z] Copying: 383/1024 [MB] (16 MBps) [2024-11-18T03:26:18.565Z] Copying: 393/1024 [MB] (10 MBps) [2024-11-18T03:26:19.505Z] Copying: 404/1024 [MB] (10 MBps) [2024-11-18T03:26:20.888Z] Copying: 418/1024 [MB] (13 MBps) [2024-11-18T03:26:21.457Z] Copying: 437/1024 [MB] (18 MBps) [2024-11-18T03:26:22.841Z] Copying: 452/1024 [MB] (15 MBps) [2024-11-18T03:26:23.785Z] Copying: 465/1024 [MB] (13 MBps) [2024-11-18T03:26:24.727Z] Copying: 482/1024 [MB] (16 MBps) [2024-11-18T03:26:25.668Z] Copying: 494/1024 [MB] (12 MBps) [2024-11-18T03:26:26.608Z] Copying: 507/1024 [MB] (12 MBps) [2024-11-18T03:26:27.549Z] Copying: 518/1024 [MB] (10 MBps) [2024-11-18T03:26:28.490Z] Copying: 528/1024 [MB] (10 MBps) [2024-11-18T03:26:29.872Z] Copying: 539/1024 [MB] (10 MBps) [2024-11-18T03:26:30.815Z] Copying: 550/1024 [MB] (10 MBps) [2024-11-18T03:26:31.759Z] Copying: 562/1024 [MB] (11 MBps) [2024-11-18T03:26:32.700Z] Copying: 575/1024 [MB] (13 MBps) [2024-11-18T03:26:33.642Z] Copying: 603/1024 [MB] (27 MBps) [2024-11-18T03:26:34.584Z] Copying: 618/1024 [MB] (15 MBps) [2024-11-18T03:26:35.529Z] Copying: 630/1024 [MB] (11 MBps) [2024-11-18T03:26:36.549Z] Copying: 644/1024 [MB] (13 MBps) [2024-11-18T03:26:37.496Z] Copying: 658/1024 [MB] (13 MBps) [2024-11-18T03:26:38.885Z] Copying: 670/1024 [MB] (12 MBps) [2024-11-18T03:26:39.457Z] Copying: 690/1024 [MB] 
(19 MBps) [2024-11-18T03:26:40.842Z] Copying: 711/1024 [MB] (20 MBps) [2024-11-18T03:26:41.786Z] Copying: 733/1024 [MB] (22 MBps) [2024-11-18T03:26:42.730Z] Copying: 749/1024 [MB] (15 MBps) [2024-11-18T03:26:43.676Z] Copying: 772/1024 [MB] (23 MBps) [2024-11-18T03:26:44.619Z] Copying: 791/1024 [MB] (19 MBps) [2024-11-18T03:26:45.561Z] Copying: 810/1024 [MB] (18 MBps) [2024-11-18T03:26:46.505Z] Copying: 832/1024 [MB] (22 MBps) [2024-11-18T03:26:47.889Z] Copying: 854/1024 [MB] (21 MBps) [2024-11-18T03:26:48.463Z] Copying: 877/1024 [MB] (23 MBps) [2024-11-18T03:26:49.852Z] Copying: 898/1024 [MB] (20 MBps) [2024-11-18T03:26:50.796Z] Copying: 909/1024 [MB] (11 MBps) [2024-11-18T03:26:51.745Z] Copying: 926/1024 [MB] (16 MBps) [2024-11-18T03:26:52.686Z] Copying: 940/1024 [MB] (13 MBps) [2024-11-18T03:26:53.627Z] Copying: 951/1024 [MB] (11 MBps) [2024-11-18T03:26:54.570Z] Copying: 962/1024 [MB] (11 MBps) [2024-11-18T03:26:55.515Z] Copying: 980/1024 [MB] (17 MBps) [2024-11-18T03:26:56.460Z] Copying: 995/1024 [MB] (15 MBps) [2024-11-18T03:26:57.405Z] Copying: 1013/1024 [MB] (17 MBps) [2024-11-18T03:26:57.405Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-18 03:26:57.114500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.828 [2024-11-18 03:26:57.114622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:53.828 [2024-11-18 03:26:57.114640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:53.828 [2024-11-18 03:26:57.114649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.828 [2024-11-18 03:26:57.114676] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:53.828 [2024-11-18 03:26:57.115869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.829 [2024-11-18 03:26:57.115920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:53.829 [2024-11-18 03:26:57.115938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:26:53.829 [2024-11-18 03:26:57.115951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.829 [2024-11-18 03:26:57.116467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.829 [2024-11-18 03:26:57.116497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:53.829 [2024-11-18 03:26:57.116523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:26:53.829 [2024-11-18 03:26:57.116536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.829 [2024-11-18 03:26:57.123574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.829 [2024-11-18 03:26:57.123635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:53.829 [2024-11-18 03:26:57.123652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.012 ms 00:26:53.829 [2024-11-18 03:26:57.123665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.829 [2024-11-18 03:26:57.131457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.829 [2024-11-18 03:26:57.131503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:53.829 [2024-11-18 03:26:57.131514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.761 ms 00:26:53.829 [2024-11-18 03:26:57.131521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.829 
[2024-11-18 03:26:57.134308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.829 [2024-11-18 03:26:57.134376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:53.829 [2024-11-18 03:26:57.134387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:26:53.829 [2024-11-18 03:26:57.134394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.829 [2024-11-18 03:26:57.139375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.829 [2024-11-18 03:26:57.139432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:53.829 [2024-11-18 03:26:57.139443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.934 ms 00:26:53.829 [2024-11-18 03:26:57.139463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.829 [2024-11-18 03:26:57.143720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.829 [2024-11-18 03:26:57.143782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:53.829 [2024-11-18 03:26:57.143793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.203 ms 00:26:53.829 [2024-11-18 03:26:57.143801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.829 [2024-11-18 03:26:57.147588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.829 [2024-11-18 03:26:57.147659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:53.829 [2024-11-18 03:26:57.147672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.766 ms 00:26:53.829 [2024-11-18 03:26:57.147679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.829 [2024-11-18 03:26:57.151217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.829 [2024-11-18 03:26:57.151292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:53.829 [2024-11-18 03:26:57.151304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.464 ms 00:26:53.829 [2024-11-18 03:26:57.151328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.829 [2024-11-18 03:26:57.153843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.829 [2024-11-18 03:26:57.153898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:53.829 [2024-11-18 03:26:57.153908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.463 ms 00:26:53.829 [2024-11-18 03:26:57.153916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.829 [2024-11-18 03:26:57.156393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.829 [2024-11-18 03:26:57.156447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:53.829 [2024-11-18 03:26:57.156458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:26:53.829 [2024-11-18 03:26:57.156465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.829 [2024-11-18 03:26:57.156510] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:53.829 [2024-11-18 03:26:57.156535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:53.829 [2024-11-18 03:26:57.156545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 
wr_cnt: 1 state: open 00:26:53.829 [2024-11-18 03:26:57.156554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:53.829 [... Bands 4 through 100 elided: each dumps an identical 'Band N: 0 / 261120 wr_cnt: 0 state: free' entry from ftl_debug.c: 167:ftl_dev_dump_bands, timestamps 03:26:57.156562 through 03:26:57.157303 ...] 00:26:53.830 [2024-11-18 03:26:57.157347] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:53.830 [2024-11-18 03:26:57.157357] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 87ba54cb-cd24-476c-9269-cc1d7eb66ebe 00:26:53.830 [2024-11-18 03:26:57.157368] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:53.830 [2024-11-18 03:26:57.157376] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:53.830 [2024-11-18 03:26:57.157384] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:53.830 [2024-11-18 03:26:57.157393] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:53.830 [2024-11-18 03:26:57.157408] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:53.830 [2024-11-18 03:26:57.157416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:53.830 [2024-11-18 03:26:57.157424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:53.830 [2024-11-18 03:26:57.157430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:53.830 [2024-11-18 03:26:57.157437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:53.830 [2024-11-18 03:26:57.157445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.830 [2024-11-18 03:26:57.157453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:53.830 [2024-11-18 03:26:57.157472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:26:53.830 [2024-11-18 03:26:57.157480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.830 [2024-11-18 03:26:57.159912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.830 [2024-11-18 03:26:57.159957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:53.830 [2024-11-18 03:26:57.159968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.406 ms 00:26:53.830 [2024-11-18 03:26:57.159977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.830 [2024-11-18 03:26:57.160106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.830 [2024-11-18 03:26:57.160115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:53.830 [2024-11-18 03:26:57.160126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:26:53.830 [2024-11-18 03:26:57.160133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.830 [2024-11-18 03:26:57.167293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.830 [2024-11-18 03:26:57.167391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:53.830 [2024-11-18 03:26:57.167402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.830 [2024-11-18 03:26:57.167410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.830 [2024-11-18 03:26:57.167471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.830 [2024-11-18 03:26:57.167480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:53.830 [2024-11-18 03:26:57.167489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.830 [2024-11-18 03:26:57.167497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.830 [2024-11-18 03:26:57.167566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.830 [2024-11-18 03:26:57.167577] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:53.830 [2024-11-18 03:26:57.167585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.830 [2024-11-18 03:26:57.167592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.830 [2024-11-18 03:26:57.167607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.830 [2024-11-18 03:26:57.167619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:53.830 [2024-11-18 03:26:57.167627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.830 [2024-11-18 03:26:57.167634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.830 [2024-11-18 03:26:57.181541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.830 [2024-11-18 03:26:57.181600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:53.830 [2024-11-18 03:26:57.181612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.830 [2024-11-18 03:26:57.181625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.830 [2024-11-18 03:26:57.191901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.830 [2024-11-18 03:26:57.191954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:53.830 [2024-11-18 03:26:57.191967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.830 [2024-11-18 03:26:57.191975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.830 [2024-11-18 03:26:57.192022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.830 [2024-11-18 03:26:57.192031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:53.831 [2024-11-18 03:26:57.192040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.831 [2024-11-18 03:26:57.192058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.831 [2024-11-18 03:26:57.192094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.831 [2024-11-18 03:26:57.192103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:53.831 [2024-11-18 03:26:57.192114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.831 [2024-11-18 03:26:57.192123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.831 [2024-11-18 03:26:57.192191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.831 [2024-11-18 03:26:57.192201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:53.831 [2024-11-18 03:26:57.192214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.831 [2024-11-18 03:26:57.192222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.831 [2024-11-18 03:26:57.192249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.831 [2024-11-18 03:26:57.192259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:53.831 [2024-11-18 03:26:57.192268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.831 [2024-11-18 03:26:57.192279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.831 [2024-11-18 03:26:57.192335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:26:53.831 [2024-11-18 03:26:57.192345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:53.831 [2024-11-18 03:26:57.192353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.831 [2024-11-18 03:26:57.192361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.831 [2024-11-18 03:26:57.192409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.831 [2024-11-18 03:26:57.192424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:53.831 [2024-11-18 03:26:57.192435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.831 [2024-11-18 03:26:57.192443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.831 [2024-11-18 03:26:57.192573] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 78.040 ms, result 0 00:26:54.092 00:26:54.092 00:26:54.092 03:26:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:56.638 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:56.638 03:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:56.638 03:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:56.638 03:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:56.638 03:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:56.638 03:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:56.638 03:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:56.638 03:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:56.638 03:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89336 00:26:56.638 03:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89336 ']' 00:26:56.638 Process with pid 89336 is not found 00:26:56.638 03:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 89336 00:26:56.638 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89336) - No such process 00:26:56.638 03:26:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89336 is not found' 00:26:56.638 03:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:56.638 03:27:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:56.638 Remove shared memory files 00:26:56.638 03:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:56.639 03:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:56.639 03:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:56.639 03:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:56.639 03:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:56.639 03:27:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:56.639 ************************************ 00:26:56.639 END TEST ftl_dirty_shutdown 00:26:56.639 ************************************ 00:26:56.639 
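The restore_kill teardown traced above follows a simple, reusable pattern: verify the checksum of the data written before the dirty shutdown, clear the traps, delete the test artifacts, then probe the pid before killing. A minimal bash sketch of that pattern, with the paths and pid hard-coded from the trace for illustration (the real helpers live in common/autotest_common.sh and the dirty_shutdown.sh script):

    #!/usr/bin/env bash
    # Minimal sketch of the verify-then-teardown flow above; paths and pid are
    # illustrative values taken from the trace, not the autotest helpers themselves.
    set -euo pipefail

    testdir=/home/vagrant/spdk_repo/spdk/test/ftl
    pid=89336    # example pid from the trace

    # Verify the data survived the dirty shutdown; md5sum -c exits non-zero on mismatch.
    md5sum -c "$testdir/testfile2.md5"

    # Verification passed: drop the error traps and remove the test artifacts.
    trap - SIGINT SIGTERM EXIT
    rm -f "$testdir/config/ftl.json" "$testdir"/testfile{,2}{,.md5}

    # kill -0 sends no signal; it only tests whether the process still exists.
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
    else
        echo "Process with pid $pid is not found"
    fi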
00:26:56.639 real 4m34.753s 00:26:56.639 user 4m49.989s 00:26:56.639 sys 0m25.315s 00:26:56.639 03:27:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:56.639 03:27:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:56.639 03:27:00 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:56.639 03:27:00 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:56.639 03:27:00 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:56.639 03:27:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:56.639 ************************************ 00:26:56.639 START TEST ftl_upgrade_shutdown 00:26:56.639 ************************************ 00:26:56.639 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:56.901 * Looking for test storage... 00:26:56.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:56.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.901 --rc genhtml_branch_coverage=1 00:26:56.901 --rc genhtml_function_coverage=1 00:26:56.901 --rc genhtml_legend=1 00:26:56.901 --rc geninfo_all_blocks=1 00:26:56.901 --rc geninfo_unexecuted_blocks=1 00:26:56.901 00:26:56.901 ' 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:56.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.901 --rc genhtml_branch_coverage=1 00:26:56.901 --rc genhtml_function_coverage=1 00:26:56.901 --rc genhtml_legend=1 00:26:56.901 --rc geninfo_all_blocks=1 00:26:56.901 --rc geninfo_unexecuted_blocks=1 00:26:56.901 00:26:56.901 ' 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:56.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.901 --rc genhtml_branch_coverage=1 00:26:56.901 --rc genhtml_function_coverage=1 00:26:56.901 --rc genhtml_legend=1 00:26:56.901 --rc geninfo_all_blocks=1 00:26:56.901 --rc geninfo_unexecuted_blocks=1 00:26:56.901 00:26:56.901 ' 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:56.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.901 --rc genhtml_branch_coverage=1 00:26:56.901 --rc genhtml_function_coverage=1 00:26:56.901 --rc genhtml_legend=1 00:26:56.901 --rc geninfo_all_blocks=1 00:26:56.901 --rc geninfo_unexecuted_blocks=1 00:26:56.901 00:26:56.901 ' 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:56.901 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:56.902 03:27:00 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92304 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92304 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92304 ']' 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:56.902 03:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:56.902 [2024-11-18 03:27:00.458770] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
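The waitforlisten call traced above blocks until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket (/var/tmp/spdk.sock by default). A rough stand-in, assuming the default socket path and an assumed ~10 s timeout; only the launch command and socket path shown in the trace are taken from the log:

    #!/usr/bin/env bash
    # Sketch of launching spdk_tgt pinned to core 0 and waiting for its RPC socket;
    # the polling loop and timeout are assumptions standing in for waitforlisten.
    set -euo pipefail

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_sock=/var/tmp/spdk.sock

    "$spdk_tgt" '--cpumask=[0]' &
    spdk_tgt_pid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
    for _ in $(seq 1 100); do                    # 100 probes x 100 ms = ~10 s budget
        # Bail out early if the target already died during startup.
        kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt died during startup" >&2; exit 1; }
        [ -S "$rpc_sock" ] && exit 0             # socket appears once the RPC server listens
        sleep 0.1
    done
    echo "timed out waiting for $rpc_sock" >&2
    exit 1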
00:26:56.902 [2024-11-18 03:27:00.458911] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92304 ] 00:26:57.164 [2024-11-18 03:27:00.611833] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.164 [2024-11-18 03:27:00.663616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:26:58.106 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:58.367 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:58.367 { 00:26:58.367 "name": "basen1", 00:26:58.367 "aliases": [ 00:26:58.367 "d71bb820-98d0-45e6-b05b-f9b4f9344a93" 00:26:58.367 ], 00:26:58.367 "product_name": "NVMe disk", 00:26:58.367 "block_size": 4096, 00:26:58.367 "num_blocks": 1310720, 00:26:58.367 "uuid": "d71bb820-98d0-45e6-b05b-f9b4f9344a93", 00:26:58.367 "numa_id": -1, 00:26:58.367 "assigned_rate_limits": { 00:26:58.367 "rw_ios_per_sec": 0, 00:26:58.367 "rw_mbytes_per_sec": 0, 00:26:58.367 "r_mbytes_per_sec": 0, 00:26:58.367 "w_mbytes_per_sec": 0 00:26:58.367 }, 00:26:58.367 "claimed": true, 00:26:58.367 "claim_type": "read_many_write_one", 00:26:58.367 "zoned": false, 00:26:58.367 "supported_io_types": { 00:26:58.367 "read": true, 00:26:58.367 "write": true, 00:26:58.367 "unmap": true, 00:26:58.367 "flush": true, 00:26:58.367 "reset": true, 00:26:58.367 "nvme_admin": true, 00:26:58.367 "nvme_io": true, 00:26:58.367 "nvme_io_md": false, 00:26:58.367 "write_zeroes": true, 00:26:58.367 "zcopy": false, 00:26:58.367 "get_zone_info": false, 00:26:58.367 "zone_management": false, 00:26:58.367 "zone_append": false, 00:26:58.367 "compare": true, 00:26:58.367 "compare_and_write": false, 00:26:58.367 "abort": true, 00:26:58.367 "seek_hole": false, 00:26:58.367 "seek_data": false, 00:26:58.367 "copy": true, 00:26:58.367 "nvme_iov_md": false 00:26:58.367 }, 00:26:58.367 "driver_specific": { 00:26:58.367 "nvme": [ 00:26:58.367 { 00:26:58.367 "pci_address": "0000:00:11.0", 00:26:58.367 "trid": { 00:26:58.367 "trtype": "PCIe", 00:26:58.367 "traddr": "0000:00:11.0" 00:26:58.367 }, 00:26:58.367 "ctrlr_data": { 00:26:58.367 "cntlid": 0, 00:26:58.367 "vendor_id": "0x1b36", 00:26:58.367 "model_number": "QEMU NVMe Ctrl", 00:26:58.367 "serial_number": "12341", 00:26:58.367 "firmware_revision": "8.0.0", 00:26:58.367 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:58.367 "oacs": { 00:26:58.367 "security": 0, 00:26:58.367 "format": 1, 00:26:58.367 "firmware": 0, 00:26:58.367 "ns_manage": 1 00:26:58.367 }, 00:26:58.367 "multi_ctrlr": false, 00:26:58.367 "ana_reporting": false 00:26:58.367 }, 00:26:58.368 "vs": { 00:26:58.368 "nvme_version": "1.4" 00:26:58.368 }, 00:26:58.368 "ns_data": { 00:26:58.368 "id": 1, 00:26:58.368 "can_share": false 00:26:58.368 } 00:26:58.368 } 00:26:58.368 ], 00:26:58.368 "mp_policy": "active_passive" 00:26:58.368 } 00:26:58.368 } 00:26:58.368 ]' 00:26:58.368 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:58.368 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:58.368 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:58.368 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:58.368 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:58.368 03:27:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:26:58.368 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:58.368 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:58.368 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:58.368 03:27:01 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:58.368 03:27:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:58.630 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=3d55b11a-56aa-4c4e-afff-07c51208a5ca 00:26:58.630 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:58.630 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3d55b11a-56aa-4c4e-afff-07c51208a5ca 00:26:58.890 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:59.151 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=f000fcfb-1cf3-4205-af4a-b15fa79d35e2 00:26:59.151 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u f000fcfb-1cf3-4205-af4a-b15fa79d35e2 00:26:59.413 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=4f84801f-a0d0-44b3-84cd-b3677981122a 00:26:59.413 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 4f84801f-a0d0-44b3-84cd-b3677981122a ]] 00:26:59.413 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 4f84801f-a0d0-44b3-84cd-b3677981122a 5120 00:26:59.413 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:59.413 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:59.413 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=4f84801f-a0d0-44b3-84cd-b3677981122a 00:26:59.413 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:59.413 03:27:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4f84801f-a0d0-44b3-84cd-b3677981122a 00:26:59.413 03:27:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=4f84801f-a0d0-44b3-84cd-b3677981122a 00:26:59.413 03:27:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:59.413 03:27:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:59.413 03:27:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:59.413 03:27:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4f84801f-a0d0-44b3-84cd-b3677981122a 00:26:59.675 03:27:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:59.675 { 00:26:59.675 "name": "4f84801f-a0d0-44b3-84cd-b3677981122a", 00:26:59.675 "aliases": [ 00:26:59.675 "lvs/basen1p0" 00:26:59.675 ], 00:26:59.675 "product_name": "Logical Volume", 00:26:59.675 "block_size": 4096, 00:26:59.675 "num_blocks": 5242880, 00:26:59.675 "uuid": "4f84801f-a0d0-44b3-84cd-b3677981122a", 00:26:59.675 "assigned_rate_limits": { 00:26:59.675 "rw_ios_per_sec": 0, 00:26:59.675 "rw_mbytes_per_sec": 0, 00:26:59.675 "r_mbytes_per_sec": 0, 00:26:59.675 "w_mbytes_per_sec": 0 00:26:59.675 }, 00:26:59.675 "claimed": false, 00:26:59.675 "zoned": false, 00:26:59.675 "supported_io_types": { 00:26:59.675 "read": true, 00:26:59.675 "write": true, 00:26:59.675 "unmap": true, 00:26:59.675 "flush": false, 00:26:59.675 "reset": true, 00:26:59.675 "nvme_admin": false, 00:26:59.675 "nvme_io": false, 00:26:59.675 "nvme_io_md": false, 00:26:59.675 "write_zeroes": 
true, 00:26:59.675 "zcopy": false, 00:26:59.675 "get_zone_info": false, 00:26:59.675 "zone_management": false, 00:26:59.675 "zone_append": false, 00:26:59.675 "compare": false, 00:26:59.675 "compare_and_write": false, 00:26:59.675 "abort": false, 00:26:59.675 "seek_hole": true, 00:26:59.675 "seek_data": true, 00:26:59.675 "copy": false, 00:26:59.675 "nvme_iov_md": false 00:26:59.675 }, 00:26:59.675 "driver_specific": { 00:26:59.675 "lvol": { 00:26:59.675 "lvol_store_uuid": "f000fcfb-1cf3-4205-af4a-b15fa79d35e2", 00:26:59.675 "base_bdev": "basen1", 00:26:59.675 "thin_provision": true, 00:26:59.675 "num_allocated_clusters": 0, 00:26:59.675 "snapshot": false, 00:26:59.675 "clone": false, 00:26:59.675 "esnap_clone": false 00:26:59.675 } 00:26:59.675 } 00:26:59.675 } 00:26:59.675 ]' 00:26:59.675 03:27:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:59.675 03:27:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:59.675 03:27:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:59.675 03:27:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:26:59.675 03:27:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:26:59.675 03:27:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:26:59.675 03:27:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:59.675 03:27:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:59.675 03:27:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:59.936 03:27:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:59.936 03:27:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:59.936 03:27:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:00.198 03:27:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:00.198 03:27:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:00.198 03:27:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 4f84801f-a0d0-44b3-84cd-b3677981122a -c cachen1p0 --l2p_dram_limit 2 00:27:00.198 [2024-11-18 03:27:03.711634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.198 [2024-11-18 03:27:03.711674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:00.198 [2024-11-18 03:27:03.711684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:00.198 [2024-11-18 03:27:03.711692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.198 [2024-11-18 03:27:03.711738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.198 [2024-11-18 03:27:03.711747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:00.198 [2024-11-18 03:27:03.711753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:00.198 [2024-11-18 03:27:03.711763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.198 [2024-11-18 03:27:03.711779] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:00.198 [2024-11-18 
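Stripped of the xtrace noise, the provisioning steps above (and the split/FTL-create steps that follow below) come down to a handful of rpc.py calls. A condensed sketch, with the UUID plumbing simplified and the size check done the same way get_bdev_size does it, via jq over bdev_get_bdevs JSON:

    #!/usr/bin/env bash
    # Condensed sketch of the FTL provisioning RPCs traced in this test:
    # a 20 GiB thin lvol on the base NVMe, one 5120 MiB split on the cache NVMe,
    # then bdev_ftl_create binding the two. UUID capture is simplified here.
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base side: attach 0000:00:11.0, build an lvstore on basen1, carve a thin lvol.
    "$rpc" bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    lvs_uuid=$("$rpc" bdev_lvol_create_lvstore basen1 lvs)
    lvol=$("$rpc" bdev_lvol_create basen1p0 20480 -t -u "$lvs_uuid")

    # Size sanity check, mirroring get_bdev_size: block_size * num_blocks in MiB.
    bs=$("$rpc" bdev_get_bdevs -b "$lvol" | jq '.[] .block_size')
    nb=$("$rpc" bdev_get_bdevs -b "$lvol" | jq '.[] .num_blocks')
    echo "base bdev: $(( bs * nb / 1024 / 1024 )) MiB"   # 4096 * 5242880 -> 20480

    # Cache side: attach 0000:00:10.0 and split one 5120 MiB partition off cachen1.
    "$rpc" bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    "$rpc" bdev_split_create cachen1 -s 5120 1           # produces cachen1p0

    # Bind base + cache into the FTL bdev with a 2 GiB L2P DRAM limit.
    "$rpc" -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2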
03:27:03.711991] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:00.198 [2024-11-18 03:27:03.712002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.198 [2024-11-18 03:27:03.712011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:00.198 [2024-11-18 03:27:03.712020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.228 ms 00:27:00.198 [2024-11-18 03:27:03.712028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.198 [2024-11-18 03:27:03.712052] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID ba78d9b1-83f2-4bbc-b9f5-228c3b2e7e48 00:27:00.198 [2024-11-18 03:27:03.713029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.198 [2024-11-18 03:27:03.713049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:00.198 [2024-11-18 03:27:03.713058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:00.198 [2024-11-18 03:27:03.713065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.198 [2024-11-18 03:27:03.718038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.198 [2024-11-18 03:27:03.718060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:00.198 [2024-11-18 03:27:03.718070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.931 ms 00:27:00.198 [2024-11-18 03:27:03.718077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.198 [2024-11-18 03:27:03.718147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.198 [2024-11-18 03:27:03.718154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:00.198 [2024-11-18 03:27:03.718163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:00.198 [2024-11-18 03:27:03.718172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.198 [2024-11-18 03:27:03.718209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.198 [2024-11-18 03:27:03.718217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:00.198 [2024-11-18 03:27:03.718225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:00.198 [2024-11-18 03:27:03.718231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.198 [2024-11-18 03:27:03.718249] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:00.198 [2024-11-18 03:27:03.719565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.198 [2024-11-18 03:27:03.719587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:00.198 [2024-11-18 03:27:03.719596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.322 ms 00:27:00.198 [2024-11-18 03:27:03.719606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.198 [2024-11-18 03:27:03.719625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.198 [2024-11-18 03:27:03.719633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:00.198 [2024-11-18 03:27:03.719639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:00.198 [2024-11-18 03:27:03.719648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:00.198 [2024-11-18 03:27:03.719669] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:00.198 [2024-11-18 03:27:03.719774] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:00.198 [2024-11-18 03:27:03.719784] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:00.198 [2024-11-18 03:27:03.719794] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:00.198 [2024-11-18 03:27:03.719802] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:00.198 [2024-11-18 03:27:03.719815] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:00.198 [2024-11-18 03:27:03.719821] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:00.198 [2024-11-18 03:27:03.719831] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:00.198 [2024-11-18 03:27:03.719838] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:00.198 [2024-11-18 03:27:03.719845] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:00.198 [2024-11-18 03:27:03.719852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.198 [2024-11-18 03:27:03.719859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:00.198 [2024-11-18 03:27:03.719865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.184 ms 00:27:00.198 [2024-11-18 03:27:03.719872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.198 [2024-11-18 03:27:03.719938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.198 [2024-11-18 03:27:03.719948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:00.198 [2024-11-18 03:27:03.719954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:00.198 [2024-11-18 03:27:03.719961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.198 [2024-11-18 03:27:03.720032] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:00.198 [2024-11-18 03:27:03.720043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:00.198 [2024-11-18 03:27:03.720052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:00.198 [2024-11-18 03:27:03.720060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.198 [2024-11-18 03:27:03.720068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:00.198 [2024-11-18 03:27:03.720075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:00.198 [2024-11-18 03:27:03.720080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:00.198 [2024-11-18 03:27:03.720089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:00.198 [2024-11-18 03:27:03.720094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:00.198 [2024-11-18 03:27:03.720101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.198 [2024-11-18 03:27:03.720106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:00.198 [2024-11-18 03:27:03.720113] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:00.198 [2024-11-18 03:27:03.720118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.198 [2024-11-18 03:27:03.720127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:00.199 [2024-11-18 03:27:03.720133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:00.199 [2024-11-18 03:27:03.720139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.199 [2024-11-18 03:27:03.720145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:00.199 [2024-11-18 03:27:03.720152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:00.199 [2024-11-18 03:27:03.720157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.199 [2024-11-18 03:27:03.720164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:00.199 [2024-11-18 03:27:03.720169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:00.199 [2024-11-18 03:27:03.720175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:00.199 [2024-11-18 03:27:03.720181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:00.199 [2024-11-18 03:27:03.720187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:00.199 [2024-11-18 03:27:03.720192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:00.199 [2024-11-18 03:27:03.720198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:00.199 [2024-11-18 03:27:03.720203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:00.199 [2024-11-18 03:27:03.720209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:00.199 [2024-11-18 03:27:03.720215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:00.199 [2024-11-18 03:27:03.720223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:00.199 [2024-11-18 03:27:03.720229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:00.199 [2024-11-18 03:27:03.720236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:00.199 [2024-11-18 03:27:03.720243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:00.199 [2024-11-18 03:27:03.720252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.199 [2024-11-18 03:27:03.720258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:00.199 [2024-11-18 03:27:03.720265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:00.199 [2024-11-18 03:27:03.720270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.199 [2024-11-18 03:27:03.720277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:00.199 [2024-11-18 03:27:03.720283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:00.199 [2024-11-18 03:27:03.720291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.199 [2024-11-18 03:27:03.720296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:00.199 [2024-11-18 03:27:03.720303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:00.199 [2024-11-18 03:27:03.720309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.199 [2024-11-18 03:27:03.720333] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:00.199 [2024-11-18 03:27:03.720340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:00.199 [2024-11-18 03:27:03.720349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:00.199 [2024-11-18 03:27:03.720355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.199 [2024-11-18 03:27:03.720364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:00.199 [2024-11-18 03:27:03.720371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:00.199 [2024-11-18 03:27:03.720379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:00.199 [2024-11-18 03:27:03.720385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:00.199 [2024-11-18 03:27:03.720392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:00.199 [2024-11-18 03:27:03.720398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:00.199 [2024-11-18 03:27:03.720408] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:00.199 [2024-11-18 03:27:03.720414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:00.199 [2024-11-18 03:27:03.720423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:00.199 [2024-11-18 03:27:03.720428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:00.199 [2024-11-18 03:27:03.720436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:00.199 [2024-11-18 03:27:03.720441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:00.199 [2024-11-18 03:27:03.720449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:00.199 [2024-11-18 03:27:03.720454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:00.199 [2024-11-18 03:27:03.720462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:00.199 [2024-11-18 03:27:03.720468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:00.199 [2024-11-18 03:27:03.720475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:00.199 [2024-11-18 03:27:03.720480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:00.199 [2024-11-18 03:27:03.720486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:00.199 [2024-11-18 03:27:03.720491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:00.199 [2024-11-18 03:27:03.720498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:00.199 [2024-11-18 03:27:03.720504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:00.199 [2024-11-18 03:27:03.720510] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:00.199 [2024-11-18 03:27:03.720518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:00.199 [2024-11-18 03:27:03.720525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:00.199 [2024-11-18 03:27:03.720531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:00.199 [2024-11-18 03:27:03.720537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:00.199 [2024-11-18 03:27:03.720543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:00.199 [2024-11-18 03:27:03.720550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.199 [2024-11-18 03:27:03.720555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:00.199 [2024-11-18 03:27:03.720564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.567 ms 00:27:00.199 [2024-11-18 03:27:03.720569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.199 [2024-11-18 03:27:03.720598] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:27:00.199 [2024-11-18 03:27:03.720605] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:05.489 [2024-11-18 03:27:08.066080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.066156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:05.489 [2024-11-18 03:27:08.066180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4345.456 ms 00:27:05.489 [2024-11-18 03:27:08.066189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.079646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.079696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:05.489 [2024-11-18 03:27:08.079718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.335 ms 00:27:05.489 [2024-11-18 03:27:08.079733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.079784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.079794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:05.489 [2024-11-18 03:27:08.079809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:05.489 [2024-11-18 03:27:08.079818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.091436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.091478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:05.489 [2024-11-18 03:27:08.091500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.541 ms 00:27:05.489 [2024-11-18 03:27:08.091509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.091545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.091557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:05.489 [2024-11-18 03:27:08.091569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:05.489 [2024-11-18 03:27:08.091577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.092090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.092126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:05.489 [2024-11-18 03:27:08.092140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.457 ms 00:27:05.489 [2024-11-18 03:27:08.092150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.092200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.092211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:05.489 [2024-11-18 03:27:08.092227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:05.489 [2024-11-18 03:27:08.092236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.110631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.110686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:05.489 [2024-11-18 03:27:08.110708] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.361 ms 00:27:05.489 [2024-11-18 03:27:08.110721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.122308] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:05.489 [2024-11-18 03:27:08.123568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.123616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:05.489 [2024-11-18 03:27:08.123628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.711 ms 00:27:05.489 [2024-11-18 03:27:08.123640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.146014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.146065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:05.489 [2024-11-18 03:27:08.146078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.338 ms 00:27:05.489 [2024-11-18 03:27:08.146098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.146205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.146219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:05.489 [2024-11-18 03:27:08.146232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:05.489 [2024-11-18 03:27:08.146243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.151296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.151360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:05.489 [2024-11-18 03:27:08.151371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.031 ms 00:27:05.489 [2024-11-18 03:27:08.151382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.156353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.156399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:05.489 [2024-11-18 03:27:08.156410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.919 ms 00:27:05.489 [2024-11-18 03:27:08.156420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.156742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.156766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:05.489 [2024-11-18 03:27:08.156777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.277 ms 00:27:05.489 [2024-11-18 03:27:08.156791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.207269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.207334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:05.489 [2024-11-18 03:27:08.207347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.439 ms 00:27:05.489 [2024-11-18 03:27:08.207359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.214172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:05.489 [2024-11-18 03:27:08.214220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:05.489 [2024-11-18 03:27:08.214232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.736 ms 00:27:05.489 [2024-11-18 03:27:08.214244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.220008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.220055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:05.489 [2024-11-18 03:27:08.220066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.718 ms 00:27:05.489 [2024-11-18 03:27:08.220076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.226330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.226374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:05.489 [2024-11-18 03:27:08.226384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.204 ms 00:27:05.489 [2024-11-18 03:27:08.226397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.226449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.226462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:05.489 [2024-11-18 03:27:08.226471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:05.489 [2024-11-18 03:27:08.226481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.226553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.489 [2024-11-18 03:27:08.226597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:05.489 [2024-11-18 03:27:08.226606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:05.489 [2024-11-18 03:27:08.226616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.489 [2024-11-18 03:27:08.227821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4515.688 ms, result 0 00:27:05.489 { 00:27:05.489 "name": "ftl", 00:27:05.489 "uuid": "ba78d9b1-83f2-4bbc-b9f5-228c3b2e7e48" 00:27:05.489 } 00:27:05.489 03:27:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:05.489 [2024-11-18 03:27:08.441445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.489 03:27:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:05.489 03:27:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:05.489 [2024-11-18 03:27:08.865897] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:05.489 03:27:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:05.750 [2024-11-18 03:27:09.082333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:05.750 03:27:09 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:06.011 Fill FTL, iteration 1 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=92432 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 92432 /var/tmp/spdk.tgt.sock 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92432 ']' 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:06.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.011 03:27:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:06.011 [2024-11-18 03:27:09.529797] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:06.011 [2024-11-18 03:27:09.530223] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92432 ] 00:27:06.273 [2024-11-18 03:27:09.680643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.273 [2024-11-18 03:27:09.753723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.902 03:27:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.902 03:27:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:06.902 03:27:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:07.163 ftln1 00:27:07.163 03:27:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:07.163 03:27:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:07.425 03:27:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:07.425 03:27:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 92432 00:27:07.425 03:27:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92432 ']' 00:27:07.425 03:27:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92432 00:27:07.425 03:27:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:07.425 03:27:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:07.425 03:27:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92432 00:27:07.425 killing process with pid 92432 00:27:07.425 03:27:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:07.425 03:27:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:07.425 03:27:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92432' 00:27:07.425 03:27:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92432 00:27:07.425 03:27:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92432 00:27:07.685 03:27:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:07.685 03:27:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:07.945 [2024-11-18 03:27:11.272490] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:07.945 [2024-11-18 03:27:11.272596] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92468 ] 00:27:07.945 [2024-11-18 03:27:11.428321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.945 [2024-11-18 03:27:11.469337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.332  [2024-11-18T03:27:13.844Z] Copying: 174/1024 [MB] (174 MBps) [2024-11-18T03:27:14.780Z] Copying: 394/1024 [MB] (220 MBps) [2024-11-18T03:27:15.715Z] Copying: 628/1024 [MB] (234 MBps) [2024-11-18T03:27:16.281Z] Copying: 890/1024 [MB] (262 MBps) [2024-11-18T03:27:16.542Z] Copying: 1024/1024 [MB] (average 224 MBps) 00:27:12.965 00:27:12.965 Calculate MD5 checksum, iteration 1 00:27:12.965 03:27:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:12.965 03:27:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:12.965 03:27:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:12.965 03:27:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:12.965 03:27:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:12.965 03:27:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:12.965 03:27:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:12.965 03:27:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:12.965 [2024-11-18 03:27:16.502140] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:12.965 [2024-11-18 03:27:16.502591] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92522 ] 00:27:13.224 [2024-11-18 03:27:16.649392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.224 [2024-11-18 03:27:16.689149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.598  [2024-11-18T03:27:18.745Z] Copying: 654/1024 [MB] (654 MBps) [2024-11-18T03:27:18.745Z] Copying: 1024/1024 [MB] (average 639 MBps) 00:27:15.168 00:27:15.168 03:27:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:15.168 03:27:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:17.702 03:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:17.702 Fill FTL, iteration 2 00:27:17.702 03:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=377d4be233ccc95c5bc4b2977ea223fc 00:27:17.702 03:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:17.702 03:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:17.702 03:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:17.702 03:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:17.702 03:27:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:17.702 03:27:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:17.702 03:27:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:17.702 03:27:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:17.702 03:27:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:17.702 [2024-11-18 03:27:20.933254] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:17.702 [2024-11-18 03:27:20.933696] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92574 ] 00:27:17.702 [2024-11-18 03:27:21.081018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.702 [2024-11-18 03:27:21.121628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.078  [2024-11-18T03:27:23.591Z] Copying: 247/1024 [MB] (247 MBps) [2024-11-18T03:27:24.525Z] Copying: 488/1024 [MB] (241 MBps) [2024-11-18T03:27:25.459Z] Copying: 721/1024 [MB] (233 MBps) [2024-11-18T03:27:25.717Z] Copying: 954/1024 [MB] (233 MBps) [2024-11-18T03:27:25.978Z] Copying: 1024/1024 [MB] (average 238 MBps) 00:27:22.401 00:27:22.401 03:27:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:22.401 03:27:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:22.401 Calculate MD5 checksum, iteration 2 00:27:22.401 03:27:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:22.401 03:27:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:22.401 03:27:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:22.401 03:27:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:22.401 03:27:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:22.401 03:27:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:22.401 [2024-11-18 03:27:25.871951] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:22.401 [2024-11-18 03:27:25.872276] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92628 ] 00:27:22.661 [2024-11-18 03:27:26.021407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.661 [2024-11-18 03:27:26.082338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.035  [2024-11-18T03:27:28.179Z] Copying: 650/1024 [MB] (650 MBps) [2024-11-18T03:27:28.750Z] Copying: 1024/1024 [MB] (average 625 MBps) 00:27:25.173 00:27:25.173 03:27:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:25.173 03:27:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:27.719 03:27:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:27.719 03:27:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b0455b940a475651cc038226b68bf4d7 00:27:27.719 03:27:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:27.719 03:27:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:27.719 03:27:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:27.719 [2024-11-18 03:27:31.086434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.719 [2024-11-18 03:27:31.086470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:27.719 [2024-11-18 03:27:31.086482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:27.719 [2024-11-18 03:27:31.086488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.719 [2024-11-18 03:27:31.086507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.719 [2024-11-18 03:27:31.086513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:27.719 [2024-11-18 03:27:31.086522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:27.719 [2024-11-18 03:27:31.086528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.719 [2024-11-18 03:27:31.086543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.719 [2024-11-18 03:27:31.086549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:27.719 [2024-11-18 03:27:31.086555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:27.719 [2024-11-18 03:27:31.086561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.719 [2024-11-18 03:27:31.086623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.177 ms, result 0 00:27:27.719 true 00:27:27.719 03:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:27.719 { 00:27:27.719 "name": "ftl", 00:27:27.719 "properties": [ 00:27:27.719 { 00:27:27.719 "name": "superblock_version", 00:27:27.719 "value": 5, 00:27:27.719 "read-only": true 00:27:27.719 }, 00:27:27.719 { 00:27:27.719 "name": "base_device", 00:27:27.719 "bands": [ 00:27:27.719 { 00:27:27.719 "id": 0, 00:27:27.719 "state": "FREE", 00:27:27.719 "validity": 0.0 
00:27:27.719 }, 00:27:27.719 { 00:27:27.719 "id": 1, 00:27:27.719 "state": "FREE", 00:27:27.719 "validity": 0.0 00:27:27.719 }, 00:27:27.719 { 00:27:27.719 "id": 2, 00:27:27.719 "state": "FREE", 00:27:27.719 "validity": 0.0 00:27:27.719 }, 00:27:27.719 { 00:27:27.719 "id": 3, 00:27:27.719 "state": "FREE", 00:27:27.719 "validity": 0.0 00:27:27.719 }, 00:27:27.719 { 00:27:27.719 "id": 4, 00:27:27.719 "state": "FREE", 00:27:27.719 "validity": 0.0 00:27:27.719 }, 00:27:27.719 { 00:27:27.719 "id": 5, 00:27:27.719 "state": "FREE", 00:27:27.719 "validity": 0.0 00:27:27.719 }, 00:27:27.719 { 00:27:27.719 "id": 6, 00:27:27.719 "state": "FREE", 00:27:27.719 "validity": 0.0 00:27:27.719 }, 00:27:27.719 { 00:27:27.719 "id": 7, 00:27:27.719 "state": "FREE", 00:27:27.719 "validity": 0.0 00:27:27.719 }, 00:27:27.719 { 00:27:27.719 "id": 8, 00:27:27.719 "state": "FREE", 00:27:27.719 "validity": 0.0 00:27:27.719 }, 00:27:27.720 { 00:27:27.720 "id": 9, 00:27:27.720 "state": "FREE", 00:27:27.720 "validity": 0.0 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "id": 10, 00:27:27.720 "state": "FREE", 00:27:27.720 "validity": 0.0 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "id": 11, 00:27:27.720 "state": "FREE", 00:27:27.720 "validity": 0.0 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "id": 12, 00:27:27.720 "state": "FREE", 00:27:27.720 "validity": 0.0 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "id": 13, 00:27:27.720 "state": "FREE", 00:27:27.720 "validity": 0.0 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "id": 14, 00:27:27.720 "state": "FREE", 00:27:27.720 "validity": 0.0 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "id": 15, 00:27:27.720 "state": "FREE", 00:27:27.720 "validity": 0.0 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "id": 16, 00:27:27.720 "state": "FREE", 00:27:27.720 "validity": 0.0 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "id": 17, 00:27:27.720 "state": "FREE", 00:27:27.720 "validity": 0.0 00:27:27.720 } 00:27:27.720 ], 00:27:27.720 "read-only": true 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "name": "cache_device", 00:27:27.720 "type": "bdev", 00:27:27.720 "chunks": [ 00:27:27.720 { 00:27:27.720 "id": 0, 00:27:27.720 "state": "INACTIVE", 00:27:27.720 "utilization": 0.0 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "id": 1, 00:27:27.720 "state": "CLOSED", 00:27:27.720 "utilization": 1.0 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "id": 2, 00:27:27.720 "state": "CLOSED", 00:27:27.720 "utilization": 1.0 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "id": 3, 00:27:27.720 "state": "OPEN", 00:27:27.720 "utilization": 0.001953125 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "id": 4, 00:27:27.720 "state": "OPEN", 00:27:27.720 "utilization": 0.0 00:27:27.720 } 00:27:27.720 ], 00:27:27.720 "read-only": true 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "name": "verbose_mode", 00:27:27.720 "value": true, 00:27:27.720 "unit": "", 00:27:27.720 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:27.720 }, 00:27:27.720 { 00:27:27.720 "name": "prep_upgrade_on_shutdown", 00:27:27.720 "value": false, 00:27:27.720 "unit": "", 00:27:27.720 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:27.720 } 00:27:27.720 ] 00:27:27.720 } 00:27:27.720 03:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:27.980 [2024-11-18 03:27:31.446718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:27.980 [2024-11-18 03:27:31.446841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:27.980 [2024-11-18 03:27:31.446890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:27.980 [2024-11-18 03:27:31.446908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.981 [2024-11-18 03:27:31.446940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.981 [2024-11-18 03:27:31.446956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:27.981 [2024-11-18 03:27:31.446971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:27.981 [2024-11-18 03:27:31.446987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.981 [2024-11-18 03:27:31.447010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.981 [2024-11-18 03:27:31.447121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:27.981 [2024-11-18 03:27:31.447189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:27.981 [2024-11-18 03:27:31.447203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.981 [2024-11-18 03:27:31.447256] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.525 ms, result 0 00:27:27.981 true 00:27:27.981 03:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:27.981 03:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:27.981 03:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:28.241 03:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:28.241 03:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:28.241 03:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:28.503 [2024-11-18 03:27:31.863071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.503 [2024-11-18 03:27:31.863099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:28.503 [2024-11-18 03:27:31.863107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:28.503 [2024-11-18 03:27:31.863113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.503 [2024-11-18 03:27:31.863129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.503 [2024-11-18 03:27:31.863135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:28.503 [2024-11-18 03:27:31.863142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:28.503 [2024-11-18 03:27:31.863147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.503 [2024-11-18 03:27:31.863161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.503 [2024-11-18 03:27:31.863167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:28.503 [2024-11-18 03:27:31.863173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:28.503 [2024-11-18 03:27:31.863178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:28.503 [2024-11-18 03:27:31.863217] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.138 ms, result 0 00:27:28.503 true 00:27:28.503 03:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:28.503 { 00:27:28.503 "name": "ftl", 00:27:28.503 "properties": [ 00:27:28.503 { 00:27:28.503 "name": "superblock_version", 00:27:28.503 "value": 5, 00:27:28.503 "read-only": true 00:27:28.503 }, 00:27:28.503 { 00:27:28.503 "name": "base_device", 00:27:28.503 "bands": [ 00:27:28.503 { 00:27:28.503 "id": 0, 00:27:28.503 "state": "FREE", 00:27:28.503 "validity": 0.0 00:27:28.503 }, 00:27:28.503 { 00:27:28.503 "id": 1, 00:27:28.503 "state": "FREE", 00:27:28.503 "validity": 0.0 00:27:28.503 }, 00:27:28.503 { 00:27:28.503 "id": 2, 00:27:28.503 "state": "FREE", 00:27:28.503 "validity": 0.0 00:27:28.503 }, 00:27:28.503 { 00:27:28.503 "id": 3, 00:27:28.503 "state": "FREE", 00:27:28.503 "validity": 0.0 00:27:28.503 }, 00:27:28.503 { 00:27:28.503 "id": 4, 00:27:28.503 "state": "FREE", 00:27:28.503 "validity": 0.0 00:27:28.503 }, 00:27:28.503 { 00:27:28.503 "id": 5, 00:27:28.503 "state": "FREE", 00:27:28.503 "validity": 0.0 00:27:28.503 }, 00:27:28.503 { 00:27:28.503 "id": 6, 00:27:28.503 "state": "FREE", 00:27:28.503 "validity": 0.0 00:27:28.503 }, 00:27:28.503 { 00:27:28.503 "id": 7, 00:27:28.503 "state": "FREE", 00:27:28.503 "validity": 0.0 00:27:28.503 }, 00:27:28.503 { 00:27:28.503 "id": 8, 00:27:28.503 "state": "FREE", 00:27:28.503 "validity": 0.0 00:27:28.503 }, 00:27:28.503 { 00:27:28.503 "id": 9, 00:27:28.503 "state": "FREE", 00:27:28.503 "validity": 0.0 00:27:28.503 }, 00:27:28.503 { 00:27:28.503 "id": 10, 00:27:28.503 "state": "FREE", 00:27:28.503 "validity": 0.0 00:27:28.503 }, 00:27:28.503 { 00:27:28.503 "id": 11, 00:27:28.503 "state": "FREE", 00:27:28.503 "validity": 0.0 00:27:28.503 }, 00:27:28.503 { 00:27:28.503 "id": 12, 00:27:28.503 "state": "FREE", 00:27:28.504 "validity": 0.0 00:27:28.504 }, 00:27:28.504 { 00:27:28.504 "id": 13, 00:27:28.504 "state": "FREE", 00:27:28.504 "validity": 0.0 00:27:28.504 }, 00:27:28.504 { 00:27:28.504 "id": 14, 00:27:28.504 "state": "FREE", 00:27:28.504 "validity": 0.0 00:27:28.504 }, 00:27:28.504 { 00:27:28.504 "id": 15, 00:27:28.504 "state": "FREE", 00:27:28.504 "validity": 0.0 00:27:28.504 }, 00:27:28.504 { 00:27:28.504 "id": 16, 00:27:28.504 "state": "FREE", 00:27:28.504 "validity": 0.0 00:27:28.504 }, 00:27:28.504 { 00:27:28.504 "id": 17, 00:27:28.504 "state": "FREE", 00:27:28.504 "validity": 0.0 00:27:28.504 } 00:27:28.504 ], 00:27:28.504 "read-only": true 00:27:28.504 }, 00:27:28.504 { 00:27:28.504 "name": "cache_device", 00:27:28.504 "type": "bdev", 00:27:28.504 "chunks": [ 00:27:28.504 { 00:27:28.504 "id": 0, 00:27:28.504 "state": "INACTIVE", 00:27:28.504 "utilization": 0.0 00:27:28.504 }, 00:27:28.504 { 00:27:28.504 "id": 1, 00:27:28.504 "state": "CLOSED", 00:27:28.504 "utilization": 1.0 00:27:28.504 }, 00:27:28.504 { 00:27:28.504 "id": 2, 00:27:28.504 "state": "CLOSED", 00:27:28.504 "utilization": 1.0 00:27:28.504 }, 00:27:28.504 { 00:27:28.504 "id": 3, 00:27:28.504 "state": "OPEN", 00:27:28.504 "utilization": 0.001953125 00:27:28.504 }, 00:27:28.504 { 00:27:28.504 "id": 4, 00:27:28.504 "state": "OPEN", 00:27:28.504 "utilization": 0.0 00:27:28.504 } 00:27:28.504 ], 00:27:28.504 "read-only": true 00:27:28.504 }, 00:27:28.504 { 00:27:28.504 "name": "verbose_mode", 
00:27:28.504 "value": true, 00:27:28.504 "unit": "", 00:27:28.504 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:28.504 }, 00:27:28.504 { 00:27:28.504 "name": "prep_upgrade_on_shutdown", 00:27:28.504 "value": true, 00:27:28.504 "unit": "", 00:27:28.504 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:28.504 } 00:27:28.504 ] 00:27:28.504 } 00:27:28.765 03:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:28.765 03:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92304 ]] 00:27:28.765 03:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92304 00:27:28.765 03:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92304 ']' 00:27:28.765 03:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92304 00:27:28.765 03:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:28.765 03:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:28.765 03:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92304 00:27:28.765 03:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:28.765 03:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:28.765 killing process with pid 92304 00:27:28.765 03:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92304' 00:27:28.765 03:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92304 00:27:28.765 03:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92304 00:27:28.765 [2024-11-18 03:27:32.184790] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:28.765 [2024-11-18 03:27:32.188621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.765 [2024-11-18 03:27:32.188650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:28.765 [2024-11-18 03:27:32.188659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:28.765 [2024-11-18 03:27:32.188665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.765 [2024-11-18 03:27:32.188690] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:28.765 [2024-11-18 03:27:32.189055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.765 [2024-11-18 03:27:32.189069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:28.765 [2024-11-18 03:27:32.189076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:27:28.765 [2024-11-18 03:27:32.189083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.777 [2024-11-18 03:27:41.284394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.777 [2024-11-18 03:27:41.284493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:38.777 [2024-11-18 03:27:41.284527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9095.249 ms 00:27:38.777 [2024-11-18 03:27:41.284538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.777 [2024-11-18 03:27:41.286353] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:38.777 [2024-11-18 03:27:41.286394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:38.777 [2024-11-18 03:27:41.286407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.796 ms 00:27:38.777 [2024-11-18 03:27:41.286417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.777 [2024-11-18 03:27:41.287578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.777 [2024-11-18 03:27:41.287613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:38.777 [2024-11-18 03:27:41.287624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.123 ms 00:27:38.777 [2024-11-18 03:27:41.287643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.777 [2024-11-18 03:27:41.291137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.777 [2024-11-18 03:27:41.291193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:38.777 [2024-11-18 03:27:41.291206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.453 ms 00:27:38.777 [2024-11-18 03:27:41.291217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.777 [2024-11-18 03:27:41.295009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.777 [2024-11-18 03:27:41.295064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:38.777 [2024-11-18 03:27:41.295077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.743 ms 00:27:38.777 [2024-11-18 03:27:41.295087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.777 [2024-11-18 03:27:41.295175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.777 [2024-11-18 03:27:41.295186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:38.777 [2024-11-18 03:27:41.295208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:38.777 [2024-11-18 03:27:41.295227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.777 [2024-11-18 03:27:41.297790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.777 [2024-11-18 03:27:41.297840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:38.777 [2024-11-18 03:27:41.297851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.543 ms 00:27:38.777 [2024-11-18 03:27:41.297860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.777 [2024-11-18 03:27:41.300432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.777 [2024-11-18 03:27:41.300680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:38.777 [2024-11-18 03:27:41.300700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.509 ms 00:27:38.777 [2024-11-18 03:27:41.300710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.777 [2024-11-18 03:27:41.303249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.777 [2024-11-18 03:27:41.303302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:38.777 [2024-11-18 03:27:41.303326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.410 ms 00:27:38.777 [2024-11-18 03:27:41.303335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.777 [2024-11-18 03:27:41.305690] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:38.777 [2024-11-18 03:27:41.305738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:27:38.777 [2024-11-18 03:27:41.305749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.268 ms
00:27:38.777 [2024-11-18 03:27:41.305757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:38.777 [2024-11-18 03:27:41.305802] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:27:38.777 [2024-11-18 03:27:41.305820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:27:38.777 [2024-11-18 03:27:41.305833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:27:38.777 [2024-11-18 03:27:41.305842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:27:38.777 [2024-11-18 03:27:41.305851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:27:38.777 [2024-11-18 03:27:41.305862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:27:38.777 [2024-11-18 03:27:41.305870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:27:38.777 [2024-11-18 03:27:41.305878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:27:38.778 [2024-11-18 03:27:41.305889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:27:38.778 [2024-11-18 03:27:41.305899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:27:38.778 [2024-11-18 03:27:41.305907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:27:38.778 [2024-11-18 03:27:41.305916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:27:38.778 [2024-11-18 03:27:41.305924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:27:38.778 [2024-11-18 03:27:41.305933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:27:38.778 [2024-11-18 03:27:41.305942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:27:38.778 [2024-11-18 03:27:41.305950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:27:38.778 [2024-11-18 03:27:41.305958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:27:38.778 [2024-11-18 03:27:41.305967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:27:38.778 [2024-11-18 03:27:41.305977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:27:38.778 [2024-11-18 03:27:41.305989] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:27:38.778 [2024-11-18 03:27:41.305998] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: ba78d9b1-83f2-4bbc-b9f5-228c3b2e7e48
00:27:38.778 [2024-11-18 03:27:41.306008] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:27:38.778 [2024-11-18 03:27:41.306020] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752
00:27:38.778 [2024-11-18 03:27:41.306030] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288
00:27:38.778 [2024-11-18 03:27:41.306039] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006
00:27:38.778 [2024-11-18 03:27:41.306047] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:27:38.778 [2024-11-18 03:27:41.306067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:27:38.778 [2024-11-18 03:27:41.306077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:27:38.778 [2024-11-18 03:27:41.306084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:27:38.778 [2024-11-18 03:27:41.306093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:27:38.778 [2024-11-18 03:27:41.306103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:38.778 [2024-11-18 03:27:41.306113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:27:38.778 [2024-11-18 03:27:41.306122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms
00:27:38.778 [2024-11-18 03:27:41.306132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:38.778 [2024-11-18 03:27:41.309356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:38.778 [2024-11-18 03:27:41.309397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:27:38.778 [2024-11-18 03:27:41.309420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.205 ms
00:27:38.778 [2024-11-18 03:27:41.309433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:38.778 [2024-11-18 03:27:41.309589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:38.778 [2024-11-18 03:27:41.309600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:27:38.778 [2024-11-18 03:27:41.309610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.126 ms
00:27:38.778 [2024-11-18 03:27:41.309617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:38.778 [2024-11-18 03:27:41.320422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:38.778 [2024-11-18 03:27:41.320467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:27:38.778 [2024-11-18 03:27:41.320487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:38.778 [2024-11-18 03:27:41.320496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:38.778 [2024-11-18 03:27:41.320533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:38.778 [2024-11-18 03:27:41.320543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:27:38.778 [2024-11-18 03:27:41.320552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:38.778 [2024-11-18 03:27:41.320561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:38.778 [2024-11-18 03:27:41.320639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:38.778 [2024-11-18 03:27:41.320653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:27:38.778 [2024-11-18 03:27:41.320663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:38.778 [2024-11-18 03:27:41.320676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:38.778 [2024-11-18 03:27:41.320699] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.778 [2024-11-18 03:27:41.320708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:38.778 [2024-11-18 03:27:41.320717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.778 [2024-11-18 03:27:41.320725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.778 [2024-11-18 03:27:41.340179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.778 [2024-11-18 03:27:41.340529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:38.778 [2024-11-18 03:27:41.340562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.778 [2024-11-18 03:27:41.340573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.778 [2024-11-18 03:27:41.356276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.778 [2024-11-18 03:27:41.356552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:38.778 [2024-11-18 03:27:41.356574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.778 [2024-11-18 03:27:41.356584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.778 [2024-11-18 03:27:41.356680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.778 [2024-11-18 03:27:41.356693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:38.778 [2024-11-18 03:27:41.356717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.778 [2024-11-18 03:27:41.356727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.778 [2024-11-18 03:27:41.356782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.778 [2024-11-18 03:27:41.356794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:38.778 [2024-11-18 03:27:41.356805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.778 [2024-11-18 03:27:41.356814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.778 [2024-11-18 03:27:41.356905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.778 [2024-11-18 03:27:41.356916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:38.778 [2024-11-18 03:27:41.356926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.778 [2024-11-18 03:27:41.356936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.778 [2024-11-18 03:27:41.356979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.778 [2024-11-18 03:27:41.356996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:38.778 [2024-11-18 03:27:41.357005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.778 [2024-11-18 03:27:41.357014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.778 [2024-11-18 03:27:41.357066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.778 [2024-11-18 03:27:41.357078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:38.778 [2024-11-18 03:27:41.357089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.778 [2024-11-18 03:27:41.357100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.778 
[2024-11-18 03:27:41.357169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:38.778 [2024-11-18 03:27:41.357184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:38.778 [2024-11-18 03:27:41.357194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:38.778 [2024-11-18 03:27:41.357204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.778 [2024-11-18 03:27:41.357422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9168.664 ms, result 0 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:42.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92820 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92820 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92820 ']' 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:42.084 03:27:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:42.344 [2024-11-18 03:27:45.693523] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
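
Note: the shutdown statistics dumped above are internally consistent: the three non-free bands account for 261120 + 261120 + 2048 = 524288 valid LBAs, matching the "total valid LBAs" line, and the reported WAF is simply total writes / user writes = 786752 / 524288, which is approximately 1.5006, i.e. the device wrote about 50% more than the user data alone. After 'FTL shutdown' finishes with result 0, the harness relaunches the target; judging from the ftl/common.sh@81-91 xtrace records visible here, the setup helper amounts to roughly the sketch below. This is a reconstruction from the log, not the verbatim SPDK test code, and $spdk_dir is a stand-in for /home/vagrant/spdk_repo/spdk.

  # Reconstructed sketch of tcp_target_setup as traced above (simplified).
  tcp_target_setup() {
      local base_bdev= cache_bdev=
      # Relaunch spdk_tgt pinned to core 0, reusing the bdev configuration
      # saved by the previous instance when it exists.
      if [[ -f "$spdk_dir/test/ftl/config/tgt.json" ]]; then
          "$spdk_dir/build/bin/spdk_tgt" '--cpumask=[0]' \
              --config="$spdk_dir/test/ftl/config/tgt.json" &
          spdk_tgt_pid=$!
      fi
      export spdk_tgt_pid
      # Block until the new process serves RPCs on /var/tmp/spdk.sock.
      waitforlisten "$spdk_tgt_pid"
  }
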
00:27:42.344 [2024-11-18 03:27:45.695943] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92820 ] 00:27:42.344 [2024-11-18 03:27:45.847891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.344 [2024-11-18 03:27:45.897296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.918 [2024-11-18 03:27:46.206906] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:42.918 [2024-11-18 03:27:46.206981] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:42.918 [2024-11-18 03:27:46.359534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.918 [2024-11-18 03:27:46.359596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:42.918 [2024-11-18 03:27:46.359622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:42.918 [2024-11-18 03:27:46.359632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.918 [2024-11-18 03:27:46.359713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.918 [2024-11-18 03:27:46.359727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:42.918 [2024-11-18 03:27:46.359740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:27:42.918 [2024-11-18 03:27:46.359750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.918 [2024-11-18 03:27:46.359782] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:42.918 [2024-11-18 03:27:46.360074] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:42.918 [2024-11-18 03:27:46.360092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.918 [2024-11-18 03:27:46.360101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:42.918 [2024-11-18 03:27:46.360118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.320 ms 00:27:42.918 [2024-11-18 03:27:46.360127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.918 [2024-11-18 03:27:46.362394] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:42.918 [2024-11-18 03:27:46.367557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.918 [2024-11-18 03:27:46.367607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:42.918 [2024-11-18 03:27:46.367621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.165 ms 00:27:42.918 [2024-11-18 03:27:46.367638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.918 [2024-11-18 03:27:46.367717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.918 [2024-11-18 03:27:46.367727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:42.918 [2024-11-18 03:27:46.367738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:42.918 [2024-11-18 03:27:46.367746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.918 [2024-11-18 03:27:46.379140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.918 [2024-11-18 
03:27:46.379186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:42.918 [2024-11-18 03:27:46.379202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.322 ms 00:27:42.918 [2024-11-18 03:27:46.379210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.918 [2024-11-18 03:27:46.379264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.918 [2024-11-18 03:27:46.379273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:42.918 [2024-11-18 03:27:46.379283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:42.918 [2024-11-18 03:27:46.379291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.918 [2024-11-18 03:27:46.379384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.918 [2024-11-18 03:27:46.379399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:42.918 [2024-11-18 03:27:46.379407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:42.918 [2024-11-18 03:27:46.379425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.918 [2024-11-18 03:27:46.379457] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:42.918 [2024-11-18 03:27:46.382180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.918 [2024-11-18 03:27:46.382418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:42.918 [2024-11-18 03:27:46.382439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.733 ms 00:27:42.918 [2024-11-18 03:27:46.382451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.918 [2024-11-18 03:27:46.382490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.918 [2024-11-18 03:27:46.382500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:42.918 [2024-11-18 03:27:46.382511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:42.918 [2024-11-18 03:27:46.382526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.918 [2024-11-18 03:27:46.382554] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:42.918 [2024-11-18 03:27:46.382597] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:42.918 [2024-11-18 03:27:46.382641] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:42.918 [2024-11-18 03:27:46.382659] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:42.918 [2024-11-18 03:27:46.382773] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:42.918 [2024-11-18 03:27:46.382796] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:42.918 [2024-11-18 03:27:46.382811] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:42.918 [2024-11-18 03:27:46.382826] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:42.918 [2024-11-18 03:27:46.382837] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:42.918 [2024-11-18 03:27:46.382846] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:42.918 [2024-11-18 03:27:46.382854] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:42.918 [2024-11-18 03:27:46.382862] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:42.918 [2024-11-18 03:27:46.382871] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:42.918 [2024-11-18 03:27:46.382882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.918 [2024-11-18 03:27:46.382890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:42.918 [2024-11-18 03:27:46.382902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.333 ms 00:27:42.918 [2024-11-18 03:27:46.382912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.918 [2024-11-18 03:27:46.383002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.918 [2024-11-18 03:27:46.383014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:42.919 [2024-11-18 03:27:46.383023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:27:42.919 [2024-11-18 03:27:46.383031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.919 [2024-11-18 03:27:46.383141] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:42.919 [2024-11-18 03:27:46.383156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:42.919 [2024-11-18 03:27:46.383166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:42.919 [2024-11-18 03:27:46.383175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.919 [2024-11-18 03:27:46.383185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:42.919 [2024-11-18 03:27:46.383194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:42.919 [2024-11-18 03:27:46.383203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:42.919 [2024-11-18 03:27:46.383212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:42.919 [2024-11-18 03:27:46.383224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:42.919 [2024-11-18 03:27:46.383233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.919 [2024-11-18 03:27:46.383241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:42.919 [2024-11-18 03:27:46.383250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:42.919 [2024-11-18 03:27:46.383258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.919 [2024-11-18 03:27:46.383266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:42.919 [2024-11-18 03:27:46.383286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:42.919 [2024-11-18 03:27:46.383300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.919 [2024-11-18 03:27:46.383332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:42.919 [2024-11-18 03:27:46.383340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:42.919 [2024-11-18 03:27:46.383348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.919 [2024-11-18 03:27:46.383357] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:42.919 [2024-11-18 03:27:46.383364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:42.919 [2024-11-18 03:27:46.383372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:42.919 [2024-11-18 03:27:46.383380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:42.919 [2024-11-18 03:27:46.383388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:42.919 [2024-11-18 03:27:46.383395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:42.919 [2024-11-18 03:27:46.383402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:42.919 [2024-11-18 03:27:46.383410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:42.919 [2024-11-18 03:27:46.383417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:42.919 [2024-11-18 03:27:46.383425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:42.919 [2024-11-18 03:27:46.383432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:42.919 [2024-11-18 03:27:46.383439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:42.919 [2024-11-18 03:27:46.383446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:42.919 [2024-11-18 03:27:46.383458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:42.919 [2024-11-18 03:27:46.383465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.919 [2024-11-18 03:27:46.383472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:42.919 [2024-11-18 03:27:46.383480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:42.919 [2024-11-18 03:27:46.383487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.919 [2024-11-18 03:27:46.383494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:42.919 [2024-11-18 03:27:46.383501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:42.919 [2024-11-18 03:27:46.383509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.919 [2024-11-18 03:27:46.383517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:42.919 [2024-11-18 03:27:46.383523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:42.919 [2024-11-18 03:27:46.383530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.919 [2024-11-18 03:27:46.383536] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:42.919 [2024-11-18 03:27:46.383548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:42.919 [2024-11-18 03:27:46.383556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:42.919 [2024-11-18 03:27:46.383566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:42.919 [2024-11-18 03:27:46.383581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:42.919 [2024-11-18 03:27:46.383592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:42.919 [2024-11-18 03:27:46.383600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:42.919 [2024-11-18 03:27:46.383606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:42.919 [2024-11-18 03:27:46.383615] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:42.919 [2024-11-18 03:27:46.383623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:42.919 [2024-11-18 03:27:46.383632] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:42.919 [2024-11-18 03:27:46.383642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.919 [2024-11-18 03:27:46.383651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:42.919 [2024-11-18 03:27:46.383659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:42.919 [2024-11-18 03:27:46.383666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:42.919 [2024-11-18 03:27:46.383673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:42.919 [2024-11-18 03:27:46.383683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:42.919 [2024-11-18 03:27:46.383690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:42.919 [2024-11-18 03:27:46.383697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:42.919 [2024-11-18 03:27:46.383704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:42.919 [2024-11-18 03:27:46.383712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:42.919 [2024-11-18 03:27:46.383725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:42.919 [2024-11-18 03:27:46.383732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:42.919 [2024-11-18 03:27:46.383740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:42.919 [2024-11-18 03:27:46.383747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:42.919 [2024-11-18 03:27:46.383755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:42.919 [2024-11-18 03:27:46.383763] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:42.919 [2024-11-18 03:27:46.383772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.919 [2024-11-18 03:27:46.383785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:42.919 [2024-11-18 03:27:46.383793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:42.919 [2024-11-18 03:27:46.383801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:42.919 [2024-11-18 03:27:46.383810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:42.919 [2024-11-18 03:27:46.383818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:42.919 [2024-11-18 03:27:46.383826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:42.919 [2024-11-18 03:27:46.383835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.750 ms 00:27:42.919 [2024-11-18 03:27:46.383844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:42.919 [2024-11-18 03:27:46.383910] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:42.919 [2024-11-18 03:27:46.383927] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:47.223 [2024-11-18 03:27:50.282887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.283061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:47.223 [2024-11-18 03:27:50.283116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3898.962 ms 00:27:47.223 [2024-11-18 03:27:50.283143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.293373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.293500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:47.223 [2024-11-18 03:27:50.293547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.140 ms 00:27:47.223 [2024-11-18 03:27:50.293566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.293647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.293667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:47.223 [2024-11-18 03:27:50.293683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:47.223 [2024-11-18 03:27:50.293699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.311813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.311972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:47.223 [2024-11-18 03:27:50.312035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.072 ms 00:27:47.223 [2024-11-18 03:27:50.312060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.312120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.312145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:47.223 [2024-11-18 03:27:50.312165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:47.223 [2024-11-18 03:27:50.312184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.312665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.312717] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:47.223 [2024-11-18 03:27:50.312739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.418 ms 00:27:47.223 [2024-11-18 03:27:50.312758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.312824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.312846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:47.223 [2024-11-18 03:27:50.312918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:47.223 [2024-11-18 03:27:50.312942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.319951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.320072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:47.223 [2024-11-18 03:27:50.320129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.974 ms 00:27:47.223 [2024-11-18 03:27:50.320154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.323474] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:47.223 [2024-11-18 03:27:50.323611] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:47.223 [2024-11-18 03:27:50.323681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.323705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:47.223 [2024-11-18 03:27:50.323727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.421 ms 00:27:47.223 [2024-11-18 03:27:50.323748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.328106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.328220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:47.223 [2024-11-18 03:27:50.328280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.309 ms 00:27:47.223 [2024-11-18 03:27:50.328305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.330155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.330262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:47.223 [2024-11-18 03:27:50.330326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.758 ms 00:27:47.223 [2024-11-18 03:27:50.330352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.331735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.331820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:47.223 [2024-11-18 03:27:50.331859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.338 ms 00:27:47.223 [2024-11-18 03:27:50.331876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.332136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.332165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:47.223 [2024-11-18 
03:27:50.332241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.200 ms 00:27:47.223 [2024-11-18 03:27:50.332260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.350750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.350859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:47.223 [2024-11-18 03:27:50.350911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.464 ms 00:27:47.223 [2024-11-18 03:27:50.350929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.357419] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:47.223 [2024-11-18 03:27:50.358219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.358300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:47.223 [2024-11-18 03:27:50.358355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.251 ms 00:27:47.223 [2024-11-18 03:27:50.358380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.358437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.358459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:47.223 [2024-11-18 03:27:50.358477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:47.223 [2024-11-18 03:27:50.358499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.358595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.358697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:47.223 [2024-11-18 03:27:50.358716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:47.223 [2024-11-18 03:27:50.358732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.358770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.358787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:47.223 [2024-11-18 03:27:50.359148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:47.223 [2024-11-18 03:27:50.359184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.359283] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:47.223 [2024-11-18 03:27:50.359332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.359377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:47.223 [2024-11-18 03:27:50.359397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:47.223 [2024-11-18 03:27:50.359413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.362892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.362992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:47.223 [2024-11-18 03:27:50.363033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.447 ms 00:27:47.223 [2024-11-18 03:27:50.363050] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:47.223 [2024-11-18 03:27:50.363119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.223 [2024-11-18 03:27:50.363140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:47.223 [2024-11-18 03:27:50.363156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:47.224 [2024-11-18 03:27:50.363171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.224 [2024-11-18 03:27:50.364070] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4004.176 ms, result 0 00:27:47.224 [2024-11-18 03:27:50.379294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.224 [2024-11-18 03:27:50.395287] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:47.224 [2024-11-18 03:27:50.403415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:47.224 03:27:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:47.224 03:27:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:47.224 03:27:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:47.224 03:27:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:47.224 03:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:47.224 [2024-11-18 03:27:50.631451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.224 [2024-11-18 03:27:50.631485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:47.224 [2024-11-18 03:27:50.631496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:47.224 [2024-11-18 03:27:50.631507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.224 [2024-11-18 03:27:50.631524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.224 [2024-11-18 03:27:50.631531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:47.224 [2024-11-18 03:27:50.631538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:47.224 [2024-11-18 03:27:50.631544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.224 [2024-11-18 03:27:50.631564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.224 [2024-11-18 03:27:50.631571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:47.224 [2024-11-18 03:27:50.631578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:47.224 [2024-11-18 03:27:50.631587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.224 [2024-11-18 03:27:50.631632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.179 ms, result 0 00:27:47.224 true 00:27:47.224 03:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:47.482 { 00:27:47.482 "name": "ftl", 00:27:47.482 "properties": [ 00:27:47.482 { 00:27:47.482 "name": "superblock_version", 00:27:47.482 "value": 5, 00:27:47.482 "read-only": true 00:27:47.482 }, 00:27:47.482 { 
00:27:47.482 "name": "base_device", 00:27:47.482 "bands": [ 00:27:47.482 { 00:27:47.482 "id": 0, 00:27:47.482 "state": "CLOSED", 00:27:47.482 "validity": 1.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 1, 00:27:47.482 "state": "CLOSED", 00:27:47.482 "validity": 1.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 2, 00:27:47.482 "state": "CLOSED", 00:27:47.482 "validity": 0.007843137254901933 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 3, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 4, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 5, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 6, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 7, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 8, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 9, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 10, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 11, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 12, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 13, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 14, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 15, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 16, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 17, 00:27:47.482 "state": "FREE", 00:27:47.482 "validity": 0.0 00:27:47.482 } 00:27:47.482 ], 00:27:47.482 "read-only": true 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "name": "cache_device", 00:27:47.482 "type": "bdev", 00:27:47.482 "chunks": [ 00:27:47.482 { 00:27:47.482 "id": 0, 00:27:47.482 "state": "INACTIVE", 00:27:47.482 "utilization": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 1, 00:27:47.482 "state": "OPEN", 00:27:47.482 "utilization": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 2, 00:27:47.482 "state": "OPEN", 00:27:47.482 "utilization": 0.0 00:27:47.482 }, 00:27:47.482 { 00:27:47.482 "id": 3, 00:27:47.482 "state": "FREE", 00:27:47.482 "utilization": 0.0 00:27:47.482 }, 00:27:47.483 { 00:27:47.483 "id": 4, 00:27:47.483 "state": "FREE", 00:27:47.483 "utilization": 0.0 00:27:47.483 } 00:27:47.483 ], 00:27:47.483 "read-only": true 00:27:47.483 }, 00:27:47.483 { 00:27:47.483 "name": "verbose_mode", 00:27:47.483 "value": true, 00:27:47.483 "unit": "", 00:27:47.483 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:47.483 }, 00:27:47.483 { 00:27:47.483 "name": "prep_upgrade_on_shutdown", 00:27:47.483 "value": false, 00:27:47.483 "unit": "", 00:27:47.483 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:47.483 } 00:27:47.483 ] 00:27:47.483 } 00:27:47.483 03:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | 
.chunks[] | select(.utilization != 0.0)] | length' 00:27:47.483 03:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:47.483 03:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:47.483 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:47.483 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:47.483 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:47.483 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:47.483 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:47.742 Validate MD5 checksum, iteration 1 00:27:47.742 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:47.742 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:47.742 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:47.742 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:47.742 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:47.742 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:47.742 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:47.742 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:47.742 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:47.742 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:47.742 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:47.742 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:47.742 03:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:47.742 [2024-11-18 03:27:51.306432] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
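
Note: the two jq filters above count NV cache chunks with non-zero utilization and bands left in the OPENED state; both come back 0 (used=0, opened=0), confirming the previous shutdown quiesced the device. The harness then re-reads the test data in 1024 MiB slices over NVMe/TCP and checks each slice's MD5 against the value recorded before shutdown. Pieced together from the upgrade_shutdown.sh@96-105 xtrace records in this log, the loop looks roughly like the sketch below; $testdir and the checksums array are assumed names, not taken from the log.

  # Sketch of test_validate_checksum, inferred from the trace above.
  test_validate_checksum() {
      local skip=0 i sum
      for ((i = 0; i < iterations; i++)); do
          echo "Validate MD5 checksum, iteration $((i + 1))"
          # Pull the next 1024 MiB of ftln1 through the NVMe/TCP initiator.
          tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 \
              --qd=2 --skip=$skip
          ((skip += 1024))
          sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
          # Compare with the checksum captured before the shutdown.
          [[ $sum == "${checksums[i]}" ]] || return 1
      done
  }
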
00:27:47.742 [2024-11-18 03:27:51.307069] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92889 ] 00:27:48.000 [2024-11-18 03:27:51.460915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.000 [2024-11-18 03:27:51.494644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.377  [2024-11-18T03:27:53.895Z] Copying: 658/1024 [MB] (658 MBps) [2024-11-18T03:27:54.461Z] Copying: 1024/1024 [MB] (average 583 MBps) 00:27:50.884 00:27:50.884 03:27:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:50.884 03:27:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:53.413 03:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:53.413 Validate MD5 checksum, iteration 2 00:27:53.414 03:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=377d4be233ccc95c5bc4b2977ea223fc 00:27:53.414 03:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 377d4be233ccc95c5bc4b2977ea223fc != \3\7\7\d\4\b\e\2\3\3\c\c\c\9\5\c\5\b\c\4\b\2\9\7\7\e\a\2\2\3\f\c ]] 00:27:53.414 03:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:53.414 03:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:53.414 03:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:53.414 03:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:53.414 03:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:53.414 03:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:53.414 03:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:53.414 03:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:53.414 03:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:53.414 [2024-11-18 03:27:56.524190] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
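
Note: the backslash-escaped string in the comparison above ([[ 377d... != \3\7\7\d... ]]) is an xtrace artifact, not corruption. The right-hand side of != inside [[ ]] is a glob pattern, and because the script expands the expected checksum from a quoted variable, bash's trace prints every character escaped to show it is matched literally. A minimal repro, with hypothetical variable names:

  set -x
  expected=377d4be233ccc95c5bc4b2977ea223fc
  # Quoted expansion forces a literal match; the trace renders it as \3\7\7\d...
  [[ 377d4be233ccc95c5bc4b2977ea223fc != "$expected" ]] || echo 'checksum OK'
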
00:27:53.414 [2024-11-18 03:27:56.524309] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92951 ] 00:27:53.414 [2024-11-18 03:27:56.665956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.414 [2024-11-18 03:27:56.698190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.799  [2024-11-18T03:27:59.317Z] Copying: 516/1024 [MB] (516 MBps) [2024-11-18T03:27:59.317Z] Copying: 987/1024 [MB] (471 MBps) [2024-11-18T03:27:59.577Z] Copying: 1024/1024 [MB] (average 494 MBps) 00:27:56.000 00:27:56.000 03:27:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:56.000 03:27:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b0455b940a475651cc038226b68bf4d7 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b0455b940a475651cc038226b68bf4d7 != \b\0\4\5\5\b\9\4\0\a\4\7\5\6\5\1\c\c\0\3\8\2\2\6\b\6\8\b\f\4\d\7 ]] 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92820 ]] 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92820 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93007 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:57.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93007 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93007 ']' 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
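
Note: with both slices verified (377d4be2... and b0455b94... each match their recorded sums), the test moves from the clean-shutdown case to the dirty one: tcp_target_shutdown_dirty sends SIGKILL to the target (hence the 'Killed' message from autotest_common.sh further down), so FTL gets no chance to persist metadata and the next instance, pid 93007, must start up from a dirty state. Per the ftl/common.sh@137-139 xtrace above, the helper is essentially the sketch below, a simplified reconstruction rather than the verbatim code.

  tcp_target_shutdown_dirty() {
      [[ -n $spdk_tgt_pid ]] || return 0
      kill -9 "$spdk_tgt_pid"   # no clean FTL shutdown; superblock stays dirty
      unset spdk_tgt_pid
  }
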
00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:57.902 03:28:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:58.162 [2024-11-18 03:28:01.531027] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:58.162 [2024-11-18 03:28:01.531144] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93007 ] 00:27:58.162 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 92820 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:58.162 [2024-11-18 03:28:01.677703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.162 [2024-11-18 03:28:01.720991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.730 [2024-11-18 03:28:02.026905] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:58.730 [2024-11-18 03:28:02.026958] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:58.730 [2024-11-18 03:28:02.173124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.730 [2024-11-18 03:28:02.173159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:58.730 [2024-11-18 03:28:02.173173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:58.730 [2024-11-18 03:28:02.173182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.730 [2024-11-18 03:28:02.173225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.730 [2024-11-18 03:28:02.173234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:58.730 [2024-11-18 03:28:02.173240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:58.730 [2024-11-18 03:28:02.173250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.730 [2024-11-18 03:28:02.173269] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:58.730 [2024-11-18 03:28:02.173468] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:58.730 [2024-11-18 03:28:02.173481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.730 [2024-11-18 03:28:02.173487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:58.730 [2024-11-18 03:28:02.173497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.218 ms 00:27:58.730 [2024-11-18 03:28:02.173503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.730 [2024-11-18 03:28:02.173962] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:58.730 [2024-11-18 03:28:02.177856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.730 [2024-11-18 03:28:02.177886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:58.730 [2024-11-18 03:28:02.177896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.895 ms 
00:27:58.730 [2024-11-18 03:28:02.177910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.730 [2024-11-18 03:28:02.178835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.730 [2024-11-18 03:28:02.178863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:58.730 [2024-11-18 03:28:02.178872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:58.730 [2024-11-18 03:28:02.178878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.730 [2024-11-18 03:28:02.179089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.730 [2024-11-18 03:28:02.179098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:58.730 [2024-11-18 03:28:02.179107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.172 ms 00:27:58.730 [2024-11-18 03:28:02.179112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.730 [2024-11-18 03:28:02.179141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.730 [2024-11-18 03:28:02.179148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:58.730 [2024-11-18 03:28:02.179154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:58.730 [2024-11-18 03:28:02.179161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.730 [2024-11-18 03:28:02.179180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.730 [2024-11-18 03:28:02.179187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:58.730 [2024-11-18 03:28:02.179193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:58.730 [2024-11-18 03:28:02.179204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.730 [2024-11-18 03:28:02.179220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:58.730 [2024-11-18 03:28:02.179933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.730 [2024-11-18 03:28:02.179951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:58.730 [2024-11-18 03:28:02.179964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.717 ms 00:27:58.730 [2024-11-18 03:28:02.179970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.730 [2024-11-18 03:28:02.179990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.730 [2024-11-18 03:28:02.179997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:58.730 [2024-11-18 03:28:02.180003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:58.730 [2024-11-18 03:28:02.180013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.730 [2024-11-18 03:28:02.180030] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:58.730 [2024-11-18 03:28:02.180046] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:58.730 [2024-11-18 03:28:02.180075] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:58.730 [2024-11-18 03:28:02.180087] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:58.730 [2024-11-18 
03:28:02.180172] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:58.730 [2024-11-18 03:28:02.180184] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:58.730 [2024-11-18 03:28:02.180194] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:58.730 [2024-11-18 03:28:02.180203] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:58.730 [2024-11-18 03:28:02.180209] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:58.730 [2024-11-18 03:28:02.180216] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:58.730 [2024-11-18 03:28:02.180222] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:58.730 [2024-11-18 03:28:02.180230] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:58.730 [2024-11-18 03:28:02.180236] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:58.730 [2024-11-18 03:28:02.180243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.730 [2024-11-18 03:28:02.180249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:58.730 [2024-11-18 03:28:02.180258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.215 ms 00:27:58.730 [2024-11-18 03:28:02.180264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.730 [2024-11-18 03:28:02.180344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.730 [2024-11-18 03:28:02.180353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:58.730 [2024-11-18 03:28:02.180360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:27:58.730 [2024-11-18 03:28:02.180367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.730 [2024-11-18 03:28:02.180446] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:58.730 [2024-11-18 03:28:02.180455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:58.731 [2024-11-18 03:28:02.180461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:58.731 [2024-11-18 03:28:02.180467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.731 [2024-11-18 03:28:02.180473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:58.731 [2024-11-18 03:28:02.180478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:58.731 [2024-11-18 03:28:02.180483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:58.731 [2024-11-18 03:28:02.180488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:58.731 [2024-11-18 03:28:02.180493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:58.731 [2024-11-18 03:28:02.180498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.731 [2024-11-18 03:28:02.180505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:58.731 [2024-11-18 03:28:02.180510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:58.731 [2024-11-18 03:28:02.180521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.731 [2024-11-18 
03:28:02.180526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:58.731 [2024-11-18 03:28:02.180531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:58.731 [2024-11-18 03:28:02.180540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.731 [2024-11-18 03:28:02.180545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:58.731 [2024-11-18 03:28:02.180550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:58.731 [2024-11-18 03:28:02.180556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.731 [2024-11-18 03:28:02.180562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:58.731 [2024-11-18 03:28:02.180568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:58.731 [2024-11-18 03:28:02.180574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:58.731 [2024-11-18 03:28:02.180579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:58.731 [2024-11-18 03:28:02.180585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:58.731 [2024-11-18 03:28:02.180591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:58.731 [2024-11-18 03:28:02.180597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:58.731 [2024-11-18 03:28:02.180603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:58.731 [2024-11-18 03:28:02.180610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:58.731 [2024-11-18 03:28:02.180616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:58.731 [2024-11-18 03:28:02.180622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:58.731 [2024-11-18 03:28:02.180627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:58.731 [2024-11-18 03:28:02.180637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:58.731 [2024-11-18 03:28:02.180643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:58.731 [2024-11-18 03:28:02.180648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.731 [2024-11-18 03:28:02.180654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:58.731 [2024-11-18 03:28:02.180660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:58.731 [2024-11-18 03:28:02.180665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.731 [2024-11-18 03:28:02.180672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:58.731 [2024-11-18 03:28:02.180678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:58.731 [2024-11-18 03:28:02.180683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.731 [2024-11-18 03:28:02.180689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:58.731 [2024-11-18 03:28:02.180695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:58.731 [2024-11-18 03:28:02.180703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.731 [2024-11-18 03:28:02.180709] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:58.731 [2024-11-18 03:28:02.180721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:58.731 
[2024-11-18 03:28:02.180727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:58.731 [2024-11-18 03:28:02.180734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:58.731 [2024-11-18 03:28:02.180743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:58.731 [2024-11-18 03:28:02.180749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:58.731 [2024-11-18 03:28:02.180755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:58.731 [2024-11-18 03:28:02.180762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:58.731 [2024-11-18 03:28:02.180768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:58.731 [2024-11-18 03:28:02.180774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:58.731 [2024-11-18 03:28:02.180781] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:58.731 [2024-11-18 03:28:02.180789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.731 [2024-11-18 03:28:02.180797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:58.731 [2024-11-18 03:28:02.180805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:58.731 [2024-11-18 03:28:02.180811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:58.731 [2024-11-18 03:28:02.180817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:58.731 [2024-11-18 03:28:02.180824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:58.731 [2024-11-18 03:28:02.180830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:58.731 [2024-11-18 03:28:02.180837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:58.731 [2024-11-18 03:28:02.180843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:58.731 [2024-11-18 03:28:02.180851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:58.731 [2024-11-18 03:28:02.180858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:58.731 [2024-11-18 03:28:02.180864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:58.731 [2024-11-18 03:28:02.180870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:58.731 [2024-11-18 03:28:02.180877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:58.731 [2024-11-18 03:28:02.180883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:58.731 [2024-11-18 03:28:02.180889] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:58.731 [2024-11-18 03:28:02.180896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.731 [2024-11-18 03:28:02.180903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:58.731 [2024-11-18 03:28:02.180909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:58.731 [2024-11-18 03:28:02.180915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:58.731 [2024-11-18 03:28:02.180923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:58.731 [2024-11-18 03:28:02.180931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.731 [2024-11-18 03:28:02.180938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:58.731 [2024-11-18 03:28:02.180944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.537 ms 00:27:58.731 [2024-11-18 03:28:02.180953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.731 [2024-11-18 03:28:02.189113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.731 [2024-11-18 03:28:02.189266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:58.731 [2024-11-18 03:28:02.189279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.111 ms 00:27:58.731 [2024-11-18 03:28:02.189287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.731 [2024-11-18 03:28:02.189332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.731 [2024-11-18 03:28:02.189345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:58.731 [2024-11-18 03:28:02.189352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:58.731 [2024-11-18 03:28:02.189359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.731 [2024-11-18 03:28:02.205786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.731 [2024-11-18 03:28:02.205819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:58.731 [2024-11-18 03:28:02.205834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.384 ms 00:27:58.731 [2024-11-18 03:28:02.205840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.731 [2024-11-18 03:28:02.205871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.731 [2024-11-18 03:28:02.205878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:58.731 [2024-11-18 03:28:02.205888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:58.731 [2024-11-18 03:28:02.205894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.731 [2024-11-18 03:28:02.205979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.731 [2024-11-18 03:28:02.205989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:27:58.731 [2024-11-18 03:28:02.205996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:58.731 [2024-11-18 03:28:02.206004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.731 [2024-11-18 03:28:02.206040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.731 [2024-11-18 03:28:02.206046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:58.731 [2024-11-18 03:28:02.206053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:58.732 [2024-11-18 03:28:02.206061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.732 [2024-11-18 03:28:02.212712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.732 [2024-11-18 03:28:02.212894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:58.732 [2024-11-18 03:28:02.212920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.633 ms 00:27:58.732 [2024-11-18 03:28:02.212931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.732 [2024-11-18 03:28:02.213035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.732 [2024-11-18 03:28:02.213049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:58.732 [2024-11-18 03:28:02.213059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:58.732 [2024-11-18 03:28:02.213073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.732 [2024-11-18 03:28:02.217705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.732 [2024-11-18 03:28:02.217836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:58.732 [2024-11-18 03:28:02.217907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.607 ms 00:27:58.732 [2024-11-18 03:28:02.217920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.732 [2024-11-18 03:28:02.219514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.732 [2024-11-18 03:28:02.219543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:58.732 [2024-11-18 03:28:02.219554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.358 ms 00:27:58.732 [2024-11-18 03:28:02.219569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.732 [2024-11-18 03:28:02.236744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.732 [2024-11-18 03:28:02.236788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:58.732 [2024-11-18 03:28:02.236799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.127 ms 00:27:58.732 [2024-11-18 03:28:02.236809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.732 [2024-11-18 03:28:02.236915] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:58.732 [2024-11-18 03:28:02.237002] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:58.732 [2024-11-18 03:28:02.237086] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:58.732 [2024-11-18 03:28:02.237170] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:58.732 [2024-11-18 03:28:02.237188] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.732 [2024-11-18 03:28:02.237195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:58.732 [2024-11-18 03:28:02.237202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.345 ms 00:27:58.732 [2024-11-18 03:28:02.237209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.732 [2024-11-18 03:28:02.237238] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:58.732 [2024-11-18 03:28:02.237251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.732 [2024-11-18 03:28:02.237257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:58.732 [2024-11-18 03:28:02.237264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:58.732 [2024-11-18 03:28:02.237269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.732 [2024-11-18 03:28:02.240060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.732 [2024-11-18 03:28:02.240181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:58.732 [2024-11-18 03:28:02.240195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.774 ms 00:27:58.732 [2024-11-18 03:28:02.240202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.732 [2024-11-18 03:28:02.240753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.732 [2024-11-18 03:28:02.240777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:58.732 [2024-11-18 03:28:02.240786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:58.732 [2024-11-18 03:28:02.240792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.732 [2024-11-18 03:28:02.240834] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:58.732 [2024-11-18 03:28:02.240991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.732 [2024-11-18 03:28:02.241004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:58.732 [2024-11-18 03:28:02.241014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.158 ms 00:27:58.732 [2024-11-18 03:28:02.241021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.298 [2024-11-18 03:28:02.740464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.298 [2024-11-18 03:28:02.740498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:59.298 [2024-11-18 03:28:02.740507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 499.204 ms 00:27:59.298 [2024-11-18 03:28:02.740514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.298 [2024-11-18 03:28:02.741934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.298 [2024-11-18 03:28:02.741962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:59.298 [2024-11-18 03:28:02.741971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.150 ms 00:27:59.298 [2024-11-18 03:28:02.741977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.298 [2024-11-18 03:28:02.742426] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:27:59.298 [2024-11-18 03:28:02.742461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.298 [2024-11-18 03:28:02.742469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:59.298 [2024-11-18 03:28:02.742476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.459 ms 00:27:59.298 [2024-11-18 03:28:02.742483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.298 [2024-11-18 03:28:02.742508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.298 [2024-11-18 03:28:02.742516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:59.298 [2024-11-18 03:28:02.742523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:59.298 [2024-11-18 03:28:02.742532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.298 [2024-11-18 03:28:02.742563] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 501.723 ms, result 0 00:27:59.298 [2024-11-18 03:28:02.742604] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:59.298 [2024-11-18 03:28:02.742742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.298 [2024-11-18 03:28:02.742751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:59.298 [2024-11-18 03:28:02.742758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.138 ms 00:27:59.298 [2024-11-18 03:28:02.742764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.414130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.414401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:59.872 [2024-11-18 03:28:03.414427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 671.103 ms 00:27:59.872 [2024-11-18 03:28:03.414437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.416470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.416520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:59.872 [2024-11-18 03:28:03.416532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.455 ms 00:27:59.872 [2024-11-18 03:28:03.416540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.417299] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:59.872 [2024-11-18 03:28:03.417372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.417382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:59.872 [2024-11-18 03:28:03.417392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.797 ms 00:27:59.872 [2024-11-18 03:28:03.417400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.417530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.417558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:59.872 [2024-11-18 03:28:03.417570] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:59.872 [2024-11-18 03:28:03.417580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.417627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 675.009 ms, result 0 00:27:59.872 [2024-11-18 03:28:03.417677] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:59.872 [2024-11-18 03:28:03.417689] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:59.872 [2024-11-18 03:28:03.417699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.417709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:59.872 [2024-11-18 03:28:03.417717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1176.876 ms 00:27:59.872 [2024-11-18 03:28:03.417726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.417776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.417793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:59.872 [2024-11-18 03:28:03.417802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:59.872 [2024-11-18 03:28:03.417813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.427080] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:59.872 [2024-11-18 03:28:03.427375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.427423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:59.872 [2024-11-18 03:28:03.427612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.542 ms 00:27:59.872 [2024-11-18 03:28:03.427644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.428452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.428586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:59.872 [2024-11-18 03:28:03.428649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.656 ms 00:27:59.872 [2024-11-18 03:28:03.428675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.431155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.431290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:59.872 [2024-11-18 03:28:03.431735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.438 ms 00:27:59.872 [2024-11-18 03:28:03.431777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.431882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.431896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:59.872 [2024-11-18 03:28:03.431907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:59.872 [2024-11-18 03:28:03.431916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.432040] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.432053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:59.872 [2024-11-18 03:28:03.432061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:59.872 [2024-11-18 03:28:03.432069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.432101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.432113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:59.872 [2024-11-18 03:28:03.432121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:59.872 [2024-11-18 03:28:03.432129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.432167] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:59.872 [2024-11-18 03:28:03.432183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.432193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:59.872 [2024-11-18 03:28:03.432201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:59.872 [2024-11-18 03:28:03.432209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.432270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.872 [2024-11-18 03:28:03.432283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:59.872 [2024-11-18 03:28:03.432292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:27:59.872 [2024-11-18 03:28:03.432301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.872 [2024-11-18 03:28:03.434285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1260.598 ms, result 0 00:28:00.134 [2024-11-18 03:28:03.448162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.134 [2024-11-18 03:28:03.464163] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:00.134 [2024-11-18 03:28:03.472359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:00.702 Validate MD5 checksum, iteration 1 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:00.702 03:28:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:00.702 [2024-11-18 03:28:04.078648] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:00.702 [2024-11-18 03:28:04.078895] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93042 ] 00:28:00.702 [2024-11-18 03:28:04.226866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.702 [2024-11-18 03:28:04.259861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.081  [2024-11-18T03:28:06.230Z] Copying: 748/1024 [MB] (748 MBps) [2024-11-18T03:28:06.800Z] Copying: 1024/1024 [MB] (average 688 MBps) 00:28:03.223 00:28:03.223 03:28:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:03.223 03:28:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:05.839 03:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:05.839 03:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=377d4be233ccc95c5bc4b2977ea223fc 00:28:05.839 Validate MD5 checksum, iteration 2 00:28:05.839 03:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 377d4be233ccc95c5bc4b2977ea223fc != \3\7\7\d\4\b\e\2\3\3\c\c\c\9\5\c\5\b\c\4\b\2\9\7\7\e\a\2\2\3\f\c ]] 00:28:05.839 03:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:05.839 03:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:05.839 03:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:05.839 03:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:05.839 03:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:05.839 03:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:05.839 03:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:05.839 03:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:05.839 03:28:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:05.839 [2024-11-18 03:28:08.833645] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:05.839 [2024-11-18 03:28:08.833849] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93092 ] 00:28:05.839 [2024-11-18 03:28:08.973727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.839 [2024-11-18 03:28:09.006444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.782  [2024-11-18T03:28:11.300Z] Copying: 511/1024 [MB] (511 MBps) [2024-11-18T03:28:15.492Z] Copying: 1024/1024 [MB] (average 547 MBps) 00:28:11.915 00:28:12.173 03:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:12.173 03:28:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b0455b940a475651cc038226b68bf4d7 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b0455b940a475651cc038226b68bf4d7 != \b\0\4\5\5\b\9\4\0\a\4\7\5\6\5\1\c\c\0\3\8\2\2\6\b\6\8\b\f\4\d\7 ]] 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93007 ]] 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93007 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 93007 ']' 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 93007 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93007 00:28:14.073 killing process with pid 93007 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93007' 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 93007 00:28:14.073 03:28:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 93007 00:28:14.333 [2024-11-18 03:28:17.735656] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:14.333 [2024-11-18 03:28:17.739656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.739690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:14.333 [2024-11-18 03:28:17.739702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:14.333 [2024-11-18 03:28:17.739708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.739728] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:14.333 [2024-11-18 03:28:17.740235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.740250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:14.333 [2024-11-18 03:28:17.740258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.496 ms 00:28:14.333 [2024-11-18 03:28:17.740265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.740470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.740483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:14.333 [2024-11-18 03:28:17.740492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:28:14.333 [2024-11-18 03:28:17.740499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.741630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.741652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:14.333 [2024-11-18 03:28:17.741660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.118 ms 00:28:14.333 [2024-11-18 03:28:17.741666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.742551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.742719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:14.333 [2024-11-18 03:28:17.742731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.861 ms 00:28:14.333 [2024-11-18 03:28:17.742738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.744298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.744333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:14.333 [2024-11-18 03:28:17.744341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.528 ms 00:28:14.333 [2024-11-18 03:28:17.744348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.745594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.745626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:28:14.333 [2024-11-18 03:28:17.745635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.219 ms 00:28:14.333 [2024-11-18 03:28:17.745641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.745719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.745728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:14.333 [2024-11-18 03:28:17.745735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:28:14.333 [2024-11-18 03:28:17.745741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.746969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.746996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:14.333 [2024-11-18 03:28:17.747003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.215 ms 00:28:14.333 [2024-11-18 03:28:17.747008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.748283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.748307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:14.333 [2024-11-18 03:28:17.748326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.249 ms 00:28:14.333 [2024-11-18 03:28:17.748331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.749564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.749590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:14.333 [2024-11-18 03:28:17.749596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.207 ms 00:28:14.333 [2024-11-18 03:28:17.749601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.750693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.750716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:14.333 [2024-11-18 03:28:17.750724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.045 ms 00:28:14.333 [2024-11-18 03:28:17.750729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.750753] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:14.333 [2024-11-18 03:28:17.750765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:14.333 [2024-11-18 03:28:17.750777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:14.333 [2024-11-18 03:28:17.750784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:14.333 [2024-11-18 03:28:17.750791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750809] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:14.333 [2024-11-18 03:28:17.750882] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:14.333 [2024-11-18 03:28:17.750888] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: ba78d9b1-83f2-4bbc-b9f5-228c3b2e7e48 00:28:14.333 [2024-11-18 03:28:17.750895] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:14.333 [2024-11-18 03:28:17.750901] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:14.333 [2024-11-18 03:28:17.750907] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:14.333 [2024-11-18 03:28:17.750913] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:14.333 [2024-11-18 03:28:17.750919] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:14.333 [2024-11-18 03:28:17.750924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:14.333 [2024-11-18 03:28:17.750931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:14.333 [2024-11-18 03:28:17.750936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:14.333 [2024-11-18 03:28:17.750940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:14.333 [2024-11-18 03:28:17.750948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.750954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:14.333 [2024-11-18 03:28:17.750961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:28:14.333 [2024-11-18 03:28:17.750968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.752595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.752614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:28:14.333 [2024-11-18 03:28:17.752622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.614 ms 00:28:14.333 [2024-11-18 03:28:17.752628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.333 [2024-11-18 03:28:17.752712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.333 [2024-11-18 03:28:17.752720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:14.334 [2024-11-18 03:28:17.752733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:28:14.334 [2024-11-18 03:28:17.752739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.334 [2024-11-18 03:28:17.758681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.334 [2024-11-18 03:28:17.758711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:14.334 [2024-11-18 03:28:17.758719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.334 [2024-11-18 03:28:17.758725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.334 [2024-11-18 03:28:17.758748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.334 [2024-11-18 03:28:17.758754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:14.334 [2024-11-18 03:28:17.758763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.334 [2024-11-18 03:28:17.758769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.334 [2024-11-18 03:28:17.758822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.334 [2024-11-18 03:28:17.758831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:14.334 [2024-11-18 03:28:17.758837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.334 [2024-11-18 03:28:17.758843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.334 [2024-11-18 03:28:17.758859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.334 [2024-11-18 03:28:17.758865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:14.334 [2024-11-18 03:28:17.758872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.334 [2024-11-18 03:28:17.758880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.334 [2024-11-18 03:28:17.769324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.334 [2024-11-18 03:28:17.769361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:14.334 [2024-11-18 03:28:17.769370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.334 [2024-11-18 03:28:17.769376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.334 [2024-11-18 03:28:17.777558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.334 [2024-11-18 03:28:17.777590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:14.334 [2024-11-18 03:28:17.777603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.334 [2024-11-18 03:28:17.777609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.334 [2024-11-18 03:28:17.777664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.334 [2024-11-18 03:28:17.777672] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:14.334 [2024-11-18 03:28:17.777679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.334 [2024-11-18 03:28:17.777685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.334 [2024-11-18 03:28:17.777713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.334 [2024-11-18 03:28:17.777721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:14.334 [2024-11-18 03:28:17.777735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.334 [2024-11-18 03:28:17.777741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.334 [2024-11-18 03:28:17.777798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.334 [2024-11-18 03:28:17.777807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:14.334 [2024-11-18 03:28:17.777814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.334 [2024-11-18 03:28:17.777819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.334 [2024-11-18 03:28:17.777845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.334 [2024-11-18 03:28:17.777853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:14.334 [2024-11-18 03:28:17.777859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.334 [2024-11-18 03:28:17.777865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.334 [2024-11-18 03:28:17.777900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.334 [2024-11-18 03:28:17.777912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:14.334 [2024-11-18 03:28:17.777918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.334 [2024-11-18 03:28:17.777924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.334 [2024-11-18 03:28:17.777961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.334 [2024-11-18 03:28:17.777970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:14.334 [2024-11-18 03:28:17.777976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.334 [2024-11-18 03:28:17.777982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.334 [2024-11-18 03:28:17.778092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 38.404 ms, result 0 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:14.593 Remove shared memory files 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92820 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:14.593 ************************************ 00:28:14.593 END TEST ftl_upgrade_shutdown 00:28:14.593 ************************************ 00:28:14.593 00:28:14.593 real 1m17.788s 00:28:14.593 user 1m42.439s 00:28:14.593 sys 0m20.260s 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:14.593 03:28:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:14.593 03:28:18 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:14.593 03:28:18 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:14.593 03:28:18 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:28:14.593 03:28:18 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:14.593 03:28:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:14.593 ************************************ 00:28:14.593 START TEST ftl_restore_fast 00:28:14.593 ************************************ 00:28:14.593 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:14.593 * Looking for test storage... 00:28:14.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:14.593 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:14.593 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:28:14.593 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:14.852 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:14.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.852 --rc genhtml_branch_coverage=1 00:28:14.852 --rc genhtml_function_coverage=1 00:28:14.852 --rc genhtml_legend=1 00:28:14.852 --rc geninfo_all_blocks=1 00:28:14.852 --rc geninfo_unexecuted_blocks=1 00:28:14.852 00:28:14.853 ' 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:14.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.853 --rc genhtml_branch_coverage=1 00:28:14.853 --rc genhtml_function_coverage=1 00:28:14.853 --rc genhtml_legend=1 00:28:14.853 --rc geninfo_all_blocks=1 00:28:14.853 --rc geninfo_unexecuted_blocks=1 00:28:14.853 00:28:14.853 ' 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:14.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.853 --rc genhtml_branch_coverage=1 00:28:14.853 --rc genhtml_function_coverage=1 00:28:14.853 --rc genhtml_legend=1 00:28:14.853 --rc geninfo_all_blocks=1 00:28:14.853 --rc geninfo_unexecuted_blocks=1 00:28:14.853 00:28:14.853 ' 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:14.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.853 --rc genhtml_branch_coverage=1 00:28:14.853 --rc genhtml_function_coverage=1 00:28:14.853 --rc genhtml_legend=1 00:28:14.853 --rc geninfo_all_blocks=1 00:28:14.853 --rc geninfo_unexecuted_blocks=1 00:28:14.853 00:28:14.853 ' 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
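
The xtrace above is scripts/common.sh deciding whether the installed lcov predates version 2: `lt 1.15 2` calls `cmp_versions 1.15 '<' 2`, which splits both version strings on `.`, `-` and `:` (`IFS=.-:` with `read -ra`), validates each component with `decimal`, and compares component-by-component up to the longer of the two lengths. A minimal self-contained sketch of the same idiom, assuming purely numeric components; the function name `version_lt` is illustrative only, the real helpers are `lt`, `cmp_versions` and `decimal` in scripts/common.sh:

#!/usr/bin/env bash
# version_lt A B -> returns 0 (true) when version A sorts strictly before B.
# Mirrors the cmp_versions idiom traced above: split on '.', '-' and ':',
# then compare the components numerically, treating missing ones as 0.
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1 # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints the message, as in the trace
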
00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.HAqXKM0rQC 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:14.853 03:28:18 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=93270 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 93270 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 93270 ']' 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.853 03:28:18 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:14.853 [2024-11-18 03:28:18.315777] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:14.853 [2024-11-18 03:28:18.316157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93270 ] 00:28:15.115 [2024-11-18 03:28:18.466823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.115 [2024-11-18 03:28:18.538748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.687 03:28:19 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:15.687 03:28:19 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:28:15.687 03:28:19 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:15.687 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:15.687 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:15.687 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:15.687 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:15.687 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:15.954 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:15.954 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:15.954 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:15.954 03:28:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:15.954 03:28:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:15.954 03:28:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:15.954 03:28:19 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:28:15.954 03:28:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:16.216 03:28:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:16.216 { 00:28:16.216 "name": "nvme0n1", 00:28:16.216 "aliases": [ 00:28:16.216 "299f5d43-538b-44b5-82c1-54a12d5b69f1" 00:28:16.216 ], 00:28:16.216 "product_name": "NVMe disk", 00:28:16.216 "block_size": 4096, 00:28:16.216 "num_blocks": 1310720, 00:28:16.216 "uuid": "299f5d43-538b-44b5-82c1-54a12d5b69f1", 00:28:16.216 "numa_id": -1, 00:28:16.216 "assigned_rate_limits": { 00:28:16.216 "rw_ios_per_sec": 0, 00:28:16.216 "rw_mbytes_per_sec": 0, 00:28:16.216 "r_mbytes_per_sec": 0, 00:28:16.216 "w_mbytes_per_sec": 0 00:28:16.216 }, 00:28:16.216 "claimed": true, 00:28:16.216 "claim_type": "read_many_write_one", 00:28:16.216 "zoned": false, 00:28:16.216 "supported_io_types": { 00:28:16.216 "read": true, 00:28:16.216 "write": true, 00:28:16.216 "unmap": true, 00:28:16.216 "flush": true, 00:28:16.216 "reset": true, 00:28:16.216 "nvme_admin": true, 00:28:16.216 "nvme_io": true, 00:28:16.216 "nvme_io_md": false, 00:28:16.216 "write_zeroes": true, 00:28:16.216 "zcopy": false, 00:28:16.216 "get_zone_info": false, 00:28:16.216 "zone_management": false, 00:28:16.216 "zone_append": false, 00:28:16.216 "compare": true, 00:28:16.216 "compare_and_write": false, 00:28:16.216 "abort": true, 00:28:16.216 "seek_hole": false, 00:28:16.216 "seek_data": false, 00:28:16.216 "copy": true, 00:28:16.216 "nvme_iov_md": false 00:28:16.216 }, 00:28:16.216 "driver_specific": { 00:28:16.216 "nvme": [ 00:28:16.216 { 00:28:16.216 "pci_address": "0000:00:11.0", 00:28:16.216 "trid": { 00:28:16.216 "trtype": "PCIe", 00:28:16.216 "traddr": "0000:00:11.0" 00:28:16.216 }, 00:28:16.216 "ctrlr_data": { 00:28:16.216 "cntlid": 0, 00:28:16.216 "vendor_id": "0x1b36", 00:28:16.216 "model_number": "QEMU NVMe Ctrl", 00:28:16.216 "serial_number": "12341", 00:28:16.216 "firmware_revision": "8.0.0", 00:28:16.216 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:16.216 "oacs": { 00:28:16.216 "security": 0, 00:28:16.216 "format": 1, 00:28:16.216 "firmware": 0, 00:28:16.216 "ns_manage": 1 00:28:16.216 }, 00:28:16.216 "multi_ctrlr": false, 00:28:16.216 "ana_reporting": false 00:28:16.216 }, 00:28:16.216 "vs": { 00:28:16.216 "nvme_version": "1.4" 00:28:16.216 }, 00:28:16.216 "ns_data": { 00:28:16.216 "id": 1, 00:28:16.216 "can_share": false 00:28:16.216 } 00:28:16.216 } 00:28:16.216 ], 00:28:16.216 "mp_policy": "active_passive" 00:28:16.216 } 00:28:16.216 } 00:28:16.216 ]' 00:28:16.216 03:28:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:16.216 03:28:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:16.216 03:28:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:16.216 03:28:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:16.216 03:28:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:16.216 03:28:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:16.216 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:16.216 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:16.216 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:16.216 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:16.216 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:16.478 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=f000fcfb-1cf3-4205-af4a-b15fa79d35e2 00:28:16.478 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:16.478 03:28:19 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f000fcfb-1cf3-4205-af4a-b15fa79d35e2 00:28:16.737 03:28:20 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:16.995 03:28:20 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=0ee00606-49ce-4180-95cc-61dea4a3a674 00:28:16.995 03:28:20 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0ee00606-49ce-4180-95cc-61dea4a3a674 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=ce910a04-b97d-4ce3-894d-8d74443d11d3 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ce910a04-b97d-4ce3-894d-8d74443d11d3 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=ce910a04-b97d-4ce3-894d-8d74443d11d3 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size ce910a04-b97d-4ce3-894d-8d74443d11d3 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=ce910a04-b97d-4ce3-894d-8d74443d11d3 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ce910a04-b97d-4ce3-894d-8d74443d11d3 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:17.254 { 00:28:17.254 "name": "ce910a04-b97d-4ce3-894d-8d74443d11d3", 00:28:17.254 "aliases": [ 00:28:17.254 "lvs/nvme0n1p0" 00:28:17.254 ], 00:28:17.254 "product_name": "Logical Volume", 00:28:17.254 "block_size": 4096, 00:28:17.254 "num_blocks": 26476544, 00:28:17.254 "uuid": "ce910a04-b97d-4ce3-894d-8d74443d11d3", 00:28:17.254 "assigned_rate_limits": { 00:28:17.254 "rw_ios_per_sec": 0, 00:28:17.254 "rw_mbytes_per_sec": 0, 00:28:17.254 "r_mbytes_per_sec": 0, 00:28:17.254 "w_mbytes_per_sec": 0 00:28:17.254 }, 00:28:17.254 "claimed": false, 00:28:17.254 "zoned": false, 00:28:17.254 "supported_io_types": { 00:28:17.254 "read": true, 00:28:17.254 "write": true, 00:28:17.254 "unmap": true, 00:28:17.254 "flush": false, 00:28:17.254 "reset": true, 00:28:17.254 "nvme_admin": false, 00:28:17.254 "nvme_io": false, 00:28:17.254 "nvme_io_md": false, 00:28:17.254 "write_zeroes": true, 00:28:17.254 "zcopy": false, 00:28:17.254 "get_zone_info": false, 00:28:17.254 "zone_management": false, 00:28:17.254 
"zone_append": false, 00:28:17.254 "compare": false, 00:28:17.254 "compare_and_write": false, 00:28:17.254 "abort": false, 00:28:17.254 "seek_hole": true, 00:28:17.254 "seek_data": true, 00:28:17.254 "copy": false, 00:28:17.254 "nvme_iov_md": false 00:28:17.254 }, 00:28:17.254 "driver_specific": { 00:28:17.254 "lvol": { 00:28:17.254 "lvol_store_uuid": "0ee00606-49ce-4180-95cc-61dea4a3a674", 00:28:17.254 "base_bdev": "nvme0n1", 00:28:17.254 "thin_provision": true, 00:28:17.254 "num_allocated_clusters": 0, 00:28:17.254 "snapshot": false, 00:28:17.254 "clone": false, 00:28:17.254 "esnap_clone": false 00:28:17.254 } 00:28:17.254 } 00:28:17.254 } 00:28:17.254 ]' 00:28:17.254 03:28:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:17.513 03:28:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:17.513 03:28:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:17.513 03:28:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:17.513 03:28:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:17.513 03:28:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:17.513 03:28:20 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:17.513 03:28:20 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:17.513 03:28:20 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:17.772 03:28:21 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:17.772 03:28:21 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:17.772 03:28:21 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size ce910a04-b97d-4ce3-894d-8d74443d11d3 00:28:17.772 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=ce910a04-b97d-4ce3-894d-8d74443d11d3 00:28:17.772 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:17.772 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:17.772 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:17.772 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ce910a04-b97d-4ce3-894d-8d74443d11d3 00:28:17.772 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:17.772 { 00:28:17.772 "name": "ce910a04-b97d-4ce3-894d-8d74443d11d3", 00:28:17.772 "aliases": [ 00:28:17.772 "lvs/nvme0n1p0" 00:28:17.772 ], 00:28:17.772 "product_name": "Logical Volume", 00:28:17.772 "block_size": 4096, 00:28:17.772 "num_blocks": 26476544, 00:28:17.772 "uuid": "ce910a04-b97d-4ce3-894d-8d74443d11d3", 00:28:17.772 "assigned_rate_limits": { 00:28:17.772 "rw_ios_per_sec": 0, 00:28:17.772 "rw_mbytes_per_sec": 0, 00:28:17.772 "r_mbytes_per_sec": 0, 00:28:17.772 "w_mbytes_per_sec": 0 00:28:17.772 }, 00:28:17.772 "claimed": false, 00:28:17.772 "zoned": false, 00:28:17.772 "supported_io_types": { 00:28:17.772 "read": true, 00:28:17.772 "write": true, 00:28:17.772 "unmap": true, 00:28:17.772 "flush": false, 00:28:17.772 "reset": true, 00:28:17.772 "nvme_admin": false, 00:28:17.772 "nvme_io": false, 00:28:17.772 "nvme_io_md": false, 00:28:17.772 "write_zeroes": true, 00:28:17.772 "zcopy": false, 00:28:17.772 "get_zone_info": false, 00:28:17.772 
"zone_management": false, 00:28:17.772 "zone_append": false, 00:28:17.772 "compare": false, 00:28:17.772 "compare_and_write": false, 00:28:17.772 "abort": false, 00:28:17.772 "seek_hole": true, 00:28:17.772 "seek_data": true, 00:28:17.772 "copy": false, 00:28:17.772 "nvme_iov_md": false 00:28:17.772 }, 00:28:17.772 "driver_specific": { 00:28:17.772 "lvol": { 00:28:17.772 "lvol_store_uuid": "0ee00606-49ce-4180-95cc-61dea4a3a674", 00:28:17.772 "base_bdev": "nvme0n1", 00:28:17.772 "thin_provision": true, 00:28:17.772 "num_allocated_clusters": 0, 00:28:17.772 "snapshot": false, 00:28:17.772 "clone": false, 00:28:17.772 "esnap_clone": false 00:28:17.772 } 00:28:17.772 } 00:28:17.772 } 00:28:17.772 ]' 00:28:17.772 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size ce910a04-b97d-4ce3-894d-8d74443d11d3 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=ce910a04-b97d-4ce3-894d-8d74443d11d3 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:18.031 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ce910a04-b97d-4ce3-894d-8d74443d11d3 00:28:18.289 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:18.289 { 00:28:18.289 "name": "ce910a04-b97d-4ce3-894d-8d74443d11d3", 00:28:18.289 "aliases": [ 00:28:18.289 "lvs/nvme0n1p0" 00:28:18.289 ], 00:28:18.289 "product_name": "Logical Volume", 00:28:18.289 "block_size": 4096, 00:28:18.289 "num_blocks": 26476544, 00:28:18.289 "uuid": "ce910a04-b97d-4ce3-894d-8d74443d11d3", 00:28:18.289 "assigned_rate_limits": { 00:28:18.289 "rw_ios_per_sec": 0, 00:28:18.289 "rw_mbytes_per_sec": 0, 00:28:18.289 "r_mbytes_per_sec": 0, 00:28:18.289 "w_mbytes_per_sec": 0 00:28:18.289 }, 00:28:18.289 "claimed": false, 00:28:18.289 "zoned": false, 00:28:18.289 "supported_io_types": { 00:28:18.289 "read": true, 00:28:18.289 "write": true, 00:28:18.289 "unmap": true, 00:28:18.289 "flush": false, 00:28:18.289 "reset": true, 00:28:18.289 "nvme_admin": false, 00:28:18.289 "nvme_io": false, 00:28:18.289 "nvme_io_md": false, 00:28:18.289 "write_zeroes": true, 00:28:18.289 "zcopy": false, 00:28:18.289 "get_zone_info": false, 00:28:18.289 "zone_management": false, 00:28:18.289 "zone_append": false, 00:28:18.289 "compare": false, 00:28:18.289 "compare_and_write": false, 00:28:18.289 "abort": false, 
00:28:18.289 "seek_hole": true, 00:28:18.289 "seek_data": true, 00:28:18.289 "copy": false, 00:28:18.289 "nvme_iov_md": false 00:28:18.289 }, 00:28:18.289 "driver_specific": { 00:28:18.289 "lvol": { 00:28:18.289 "lvol_store_uuid": "0ee00606-49ce-4180-95cc-61dea4a3a674", 00:28:18.289 "base_bdev": "nvme0n1", 00:28:18.289 "thin_provision": true, 00:28:18.289 "num_allocated_clusters": 0, 00:28:18.289 "snapshot": false, 00:28:18.289 "clone": false, 00:28:18.289 "esnap_clone": false 00:28:18.289 } 00:28:18.289 } 00:28:18.289 } 00:28:18.289 ]' 00:28:18.289 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:18.289 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:18.289 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:18.551 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:18.551 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:18.551 03:28:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:18.551 03:28:21 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:18.551 03:28:21 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ce910a04-b97d-4ce3-894d-8d74443d11d3 --l2p_dram_limit 10' 00:28:18.551 03:28:21 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:18.551 03:28:21 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:18.551 03:28:21 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:18.551 03:28:21 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:18.551 03:28:21 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:18.551 03:28:21 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ce910a04-b97d-4ce3-894d-8d74443d11d3 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:18.551 [2024-11-18 03:28:22.054288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.551 [2024-11-18 03:28:22.054343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:18.551 [2024-11-18 03:28:22.054355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:18.551 [2024-11-18 03:28:22.054363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.551 [2024-11-18 03:28:22.054402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.551 [2024-11-18 03:28:22.054412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:18.551 [2024-11-18 03:28:22.054421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:18.551 [2024-11-18 03:28:22.054431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.551 [2024-11-18 03:28:22.054450] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:18.551 [2024-11-18 03:28:22.054635] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:18.551 [2024-11-18 03:28:22.054647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.551 [2024-11-18 03:28:22.054655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:18.551 [2024-11-18 03:28:22.054664] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:28:18.551 [2024-11-18 03:28:22.054672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.551 [2024-11-18 03:28:22.054694] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3b07b35b-3d9f-4fe6-9057-425d629783dd 00:28:18.551 [2024-11-18 03:28:22.055947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.551 [2024-11-18 03:28:22.055966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:18.551 [2024-11-18 03:28:22.055975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:18.551 [2024-11-18 03:28:22.055981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.551 [2024-11-18 03:28:22.062867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.551 [2024-11-18 03:28:22.062893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:18.551 [2024-11-18 03:28:22.062903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.843 ms 00:28:18.551 [2024-11-18 03:28:22.062909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.551 [2024-11-18 03:28:22.063003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.551 [2024-11-18 03:28:22.063011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:18.551 [2024-11-18 03:28:22.063022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:18.551 [2024-11-18 03:28:22.063030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.551 [2024-11-18 03:28:22.063066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.551 [2024-11-18 03:28:22.063077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:18.551 [2024-11-18 03:28:22.063085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:18.551 [2024-11-18 03:28:22.063091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.551 [2024-11-18 03:28:22.063108] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:18.551 [2024-11-18 03:28:22.064729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.551 [2024-11-18 03:28:22.064755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:18.551 [2024-11-18 03:28:22.064764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:28:18.551 [2024-11-18 03:28:22.064772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.551 [2024-11-18 03:28:22.064802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.551 [2024-11-18 03:28:22.064812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:18.551 [2024-11-18 03:28:22.064818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:18.551 [2024-11-18 03:28:22.064828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.551 [2024-11-18 03:28:22.064846] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:18.551 [2024-11-18 03:28:22.064963] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:18.551 [2024-11-18 03:28:22.064972] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:18.551 [2024-11-18 03:28:22.064983] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:18.551 [2024-11-18 03:28:22.064991] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:18.551 [2024-11-18 03:28:22.065000] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:18.551 [2024-11-18 03:28:22.065006] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:18.551 [2024-11-18 03:28:22.065021] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:18.551 [2024-11-18 03:28:22.065027] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:18.551 [2024-11-18 03:28:22.065034] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:18.551 [2024-11-18 03:28:22.065041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.551 [2024-11-18 03:28:22.065049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:18.551 [2024-11-18 03:28:22.065057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:28:18.551 [2024-11-18 03:28:22.065065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.551 [2024-11-18 03:28:22.065130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.551 [2024-11-18 03:28:22.065139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:18.551 [2024-11-18 03:28:22.065145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:18.551 [2024-11-18 03:28:22.065152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.551 [2024-11-18 03:28:22.065225] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:18.551 [2024-11-18 03:28:22.065236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:18.551 [2024-11-18 03:28:22.065242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:18.551 [2024-11-18 03:28:22.065252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.551 [2024-11-18 03:28:22.065258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:18.551 [2024-11-18 03:28:22.065266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:18.551 [2024-11-18 03:28:22.065271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:18.551 [2024-11-18 03:28:22.065279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:18.551 [2024-11-18 03:28:22.065285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:18.551 [2024-11-18 03:28:22.065292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:18.551 [2024-11-18 03:28:22.065297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:18.551 [2024-11-18 03:28:22.065305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:18.551 [2024-11-18 03:28:22.065330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:18.551 [2024-11-18 03:28:22.065342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:18.551 [2024-11-18 03:28:22.065348] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:18.551 [2024-11-18 03:28:22.065355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.551 [2024-11-18 03:28:22.065360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:18.551 [2024-11-18 03:28:22.065368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:18.551 [2024-11-18 03:28:22.065373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.551 [2024-11-18 03:28:22.065380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:18.551 [2024-11-18 03:28:22.065386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:18.551 [2024-11-18 03:28:22.065393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:18.551 [2024-11-18 03:28:22.065399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:18.551 [2024-11-18 03:28:22.065408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:18.551 [2024-11-18 03:28:22.065414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:18.551 [2024-11-18 03:28:22.065421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:18.551 [2024-11-18 03:28:22.065427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:18.552 [2024-11-18 03:28:22.065434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:18.552 [2024-11-18 03:28:22.065440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:18.552 [2024-11-18 03:28:22.065450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:18.552 [2024-11-18 03:28:22.065457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:18.552 [2024-11-18 03:28:22.065466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:18.552 [2024-11-18 03:28:22.065471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:18.552 [2024-11-18 03:28:22.065479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:18.552 [2024-11-18 03:28:22.065485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:18.552 [2024-11-18 03:28:22.065493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:18.552 [2024-11-18 03:28:22.065498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:18.552 [2024-11-18 03:28:22.065505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:18.552 [2024-11-18 03:28:22.065511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:18.552 [2024-11-18 03:28:22.065518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.552 [2024-11-18 03:28:22.065525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:18.552 [2024-11-18 03:28:22.065532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:18.552 [2024-11-18 03:28:22.065537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.552 [2024-11-18 03:28:22.065545] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:18.552 [2024-11-18 03:28:22.065555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:18.552 [2024-11-18 03:28:22.065568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:18.552 [2024-11-18 03:28:22.065575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.552 [2024-11-18 03:28:22.065584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:18.552 [2024-11-18 03:28:22.065590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:18.552 [2024-11-18 03:28:22.065598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:18.552 [2024-11-18 03:28:22.065604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:18.552 [2024-11-18 03:28:22.065611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:18.552 [2024-11-18 03:28:22.065617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:18.552 [2024-11-18 03:28:22.065628] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:18.552 [2024-11-18 03:28:22.065637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:18.552 [2024-11-18 03:28:22.065646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:18.552 [2024-11-18 03:28:22.065654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:18.552 [2024-11-18 03:28:22.065662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:18.552 [2024-11-18 03:28:22.065668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:18.552 [2024-11-18 03:28:22.065677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:18.552 [2024-11-18 03:28:22.065683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:18.552 [2024-11-18 03:28:22.065692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:18.552 [2024-11-18 03:28:22.065699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:18.552 [2024-11-18 03:28:22.065707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:18.552 [2024-11-18 03:28:22.065715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:18.552 [2024-11-18 03:28:22.065724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:18.552 [2024-11-18 03:28:22.065731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:18.552 [2024-11-18 03:28:22.065739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:18.552 [2024-11-18 03:28:22.065747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
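
In the superblock metadata dump above, `blk_offs` and `blk_sz` are hexadecimal counts of FTL blocks; with the 4096-byte block size reported by bdev_get_bdevs earlier, they line up with the human-readable sizes in the preceding layout dump (blk_sz:0x5000 is 20480 blocks = 80 MiB, matching the l2p region's "blocks: 80.00 MiB", and its blk_offs:0x20 is 32 blocks = 0.12 MiB, matching "offset: 0.12 MiB"). A quick way to cross-check any entry, assuming the 4 KiB block size; the helper name blks_to_mib is illustrative:

blks_to_mib() { echo "$(( $1 * 4096 / 1024 / 1024 )) MiB"; }  # bash arithmetic accepts 0x-prefixed hex
blks_to_mib 0x5000  # -> 80 MiB (layout dump: l2p "blocks: 80.00 MiB")
blks_to_mib 0x800   # -> 8 MiB  (layout dump: each p2l checkpoint "blocks: 8.00 MiB")
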
00:28:18.552 [2024-11-18 03:28:22.065755] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:18.552 [2024-11-18 03:28:22.065769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:18.552 [2024-11-18 03:28:22.065776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:18.552 [2024-11-18 03:28:22.065782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:18.552 [2024-11-18 03:28:22.065789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:18.552 [2024-11-18 03:28:22.065795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:18.552 [2024-11-18 03:28:22.065804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.552 [2024-11-18 03:28:22.065814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:18.552 [2024-11-18 03:28:22.065824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:28:18.552 [2024-11-18 03:28:22.065831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.552 [2024-11-18 03:28:22.065862] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:18.552 [2024-11-18 03:28:22.065875] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:22.820 [2024-11-18 03:28:25.690214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.820 [2024-11-18 03:28:25.690485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:22.820 [2024-11-18 03:28:25.690577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3624.329 ms 00:28:22.820 [2024-11-18 03:28:25.690619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.820 [2024-11-18 03:28:25.706105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.820 [2024-11-18 03:28:25.706330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:22.820 [2024-11-18 03:28:25.706474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.361 ms 00:28:22.820 [2024-11-18 03:28:25.706505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.820 [2024-11-18 03:28:25.706657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.820 [2024-11-18 03:28:25.706688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:22.820 [2024-11-18 03:28:25.706780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:28:22.820 [2024-11-18 03:28:25.706812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.820 [2024-11-18 03:28:25.721108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.820 [2024-11-18 03:28:25.721296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:22.820 [2024-11-18 03:28:25.721535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.226 ms 00:28:22.820 [2024-11-18 03:28:25.721581] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.820 [2024-11-18 03:28:25.721636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.820 [2024-11-18 03:28:25.721663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:22.820 [2024-11-18 03:28:25.721687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:22.820 [2024-11-18 03:28:25.721709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.820 [2024-11-18 03:28:25.722466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.820 [2024-11-18 03:28:25.722635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:22.820 [2024-11-18 03:28:25.722828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:28:22.820 [2024-11-18 03:28:25.723022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.820 [2024-11-18 03:28:25.723174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.820 [2024-11-18 03:28:25.723189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:22.820 [2024-11-18 03:28:25.723207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:28:22.820 [2024-11-18 03:28:25.723217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.820 [2024-11-18 03:28:25.742241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.820 [2024-11-18 03:28:25.742300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:22.820 [2024-11-18 03:28:25.742342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.993 ms 00:28:22.820 [2024-11-18 03:28:25.742354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.820 [2024-11-18 03:28:25.754003] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:22.820 [2024-11-18 03:28:25.759340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.820 [2024-11-18 03:28:25.759390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:22.820 [2024-11-18 03:28:25.759403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.864 ms 00:28:22.820 [2024-11-18 03:28:25.759415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.820 [2024-11-18 03:28:25.856159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.820 [2024-11-18 03:28:25.856224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:22.820 [2024-11-18 03:28:25.856239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.705 ms 00:28:22.820 [2024-11-18 03:28:25.856255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.820 [2024-11-18 03:28:25.856522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.820 [2024-11-18 03:28:25.856548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:22.821 [2024-11-18 03:28:25.856558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:28:22.821 [2024-11-18 03:28:25.856571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:25.862761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:25.862825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:22.821 [2024-11-18 03:28:25.862838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.150 ms 00:28:22.821 [2024-11-18 03:28:25.862850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:25.868235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:25.868465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:22.821 [2024-11-18 03:28:25.868491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.328 ms 00:28:22.821 [2024-11-18 03:28:25.868503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:25.868854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:25.868870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:22.821 [2024-11-18 03:28:25.868882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:28:22.821 [2024-11-18 03:28:25.868896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:25.917713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:25.917775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:22.821 [2024-11-18 03:28:25.917788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.790 ms 00:28:22.821 [2024-11-18 03:28:25.917801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:25.926196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:25.926259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:22.821 [2024-11-18 03:28:25.926272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.310 ms 00:28:22.821 [2024-11-18 03:28:25.926284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:25.932599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:25.932655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:22.821 [2024-11-18 03:28:25.932666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.232 ms 00:28:22.821 [2024-11-18 03:28:25.932677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:25.939502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:25.939701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:22.821 [2024-11-18 03:28:25.939721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.774 ms 00:28:22.821 [2024-11-18 03:28:25.939735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:25.939818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:25.939834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:22.821 [2024-11-18 03:28:25.939845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:22.821 [2024-11-18 03:28:25.939856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:25.939974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:25.939989] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:22.821 [2024-11-18 03:28:25.939998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:22.821 [2024-11-18 03:28:25.940017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:25.941621] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3886.704 ms, result 0 00:28:22.821 { 00:28:22.821 "name": "ftl0", 00:28:22.821 "uuid": "3b07b35b-3d9f-4fe6-9057-425d629783dd" 00:28:22.821 } 00:28:22.821 03:28:25 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:22.821 03:28:25 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:22.821 03:28:26 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:22.821 03:28:26 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:22.821 [2024-11-18 03:28:26.380521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:26.380573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:22.821 [2024-11-18 03:28:26.380590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:22.821 [2024-11-18 03:28:26.380599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:26.380634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:22.821 [2024-11-18 03:28:26.381663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:26.381723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:22.821 [2024-11-18 03:28:26.381736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:28:22.821 [2024-11-18 03:28:26.381753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:26.382020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:26.382048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:22.821 [2024-11-18 03:28:26.382058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:28:22.821 [2024-11-18 03:28:26.382069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:26.385358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:26.385389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:22.821 [2024-11-18 03:28:26.385401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.264 ms 00:28:22.821 [2024-11-18 03:28:26.385414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.821 [2024-11-18 03:28:26.391739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.821 [2024-11-18 03:28:26.391783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:22.821 [2024-11-18 03:28:26.391797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.306 ms 00:28:22.821 [2024-11-18 03:28:26.391808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.084 [2024-11-18 03:28:26.395414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:23.085 [2024-11-18 03:28:26.395474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:23.085 [2024-11-18 03:28:26.395485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.509 ms 00:28:23.085 [2024-11-18 03:28:26.395496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.085 [2024-11-18 03:28:26.403391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.085 [2024-11-18 03:28:26.403452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:23.085 [2024-11-18 03:28:26.403466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.844 ms 00:28:23.085 [2024-11-18 03:28:26.403479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.085 [2024-11-18 03:28:26.403621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.085 [2024-11-18 03:28:26.403644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:23.085 [2024-11-18 03:28:26.403656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:23.085 [2024-11-18 03:28:26.403670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.085 [2024-11-18 03:28:26.406902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.085 [2024-11-18 03:28:26.407130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:23.085 [2024-11-18 03:28:26.407150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.207 ms 00:28:23.085 [2024-11-18 03:28:26.407161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.085 [2024-11-18 03:28:26.410189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.085 [2024-11-18 03:28:26.410411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:23.085 [2024-11-18 03:28:26.410433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.947 ms 00:28:23.085 [2024-11-18 03:28:26.410445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.085 [2024-11-18 03:28:26.412946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.085 [2024-11-18 03:28:26.413006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:23.085 [2024-11-18 03:28:26.413017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.455 ms 00:28:23.085 [2024-11-18 03:28:26.413033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.085 [2024-11-18 03:28:26.415562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.085 [2024-11-18 03:28:26.415618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:23.085 [2024-11-18 03:28:26.415628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.431 ms 00:28:23.085 [2024-11-18 03:28:26.415638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.085 [2024-11-18 03:28:26.415686] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:23.085 [2024-11-18 03:28:26.415706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415728] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415976] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.415996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 
03:28:26.416221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:23.085 [2024-11-18 03:28:26.416341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:28:23.086 [2024-11-18 03:28:26.416492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:23.086 [2024-11-18 03:28:26.416727] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:23.086 [2024-11-18 03:28:26.416736] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b07b35b-3d9f-4fe6-9057-425d629783dd 00:28:23.086 
[2024-11-18 03:28:26.416748] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:23.086 [2024-11-18 03:28:26.416757] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:23.086 [2024-11-18 03:28:26.416767] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:23.086 [2024-11-18 03:28:26.416775] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:23.086 [2024-11-18 03:28:26.416786] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:23.086 [2024-11-18 03:28:26.416796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:23.086 [2024-11-18 03:28:26.416806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:23.086 [2024-11-18 03:28:26.416813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:23.086 [2024-11-18 03:28:26.416824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:23.086 [2024-11-18 03:28:26.416832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.086 [2024-11-18 03:28:26.416849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:23.086 [2024-11-18 03:28:26.416862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:28:23.086 [2024-11-18 03:28:26.416874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.086 [2024-11-18 03:28:26.419409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.086 [2024-11-18 03:28:26.419572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:23.086 [2024-11-18 03:28:26.419632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.512 ms 00:28:23.086 [2024-11-18 03:28:26.419667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.086 [2024-11-18 03:28:26.419797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.086 [2024-11-18 03:28:26.419918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:23.086 [2024-11-18 03:28:26.419997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:23.086 [2024-11-18 03:28:26.420026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.086 [2024-11-18 03:28:26.431262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.086 [2024-11-18 03:28:26.431475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:23.086 [2024-11-18 03:28:26.431544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.086 [2024-11-18 03:28:26.431575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.086 [2024-11-18 03:28:26.431677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.086 [2024-11-18 03:28:26.431704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:23.086 [2024-11-18 03:28:26.431727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.086 [2024-11-18 03:28:26.431750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.086 [2024-11-18 03:28:26.431885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.086 [2024-11-18 03:28:26.432059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:23.086 [2024-11-18 03:28:26.432084] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.086 [2024-11-18 03:28:26.432107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.086 [2024-11-18 03:28:26.432145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.086 [2024-11-18 03:28:26.432174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:23.086 [2024-11-18 03:28:26.432344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.086 [2024-11-18 03:28:26.432376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.086 [2024-11-18 03:28:26.452228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.086 [2024-11-18 03:28:26.452474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:23.086 [2024-11-18 03:28:26.452548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.086 [2024-11-18 03:28:26.452578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.086 [2024-11-18 03:28:26.469231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.086 [2024-11-18 03:28:26.469589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:23.086 [2024-11-18 03:28:26.469727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.086 [2024-11-18 03:28:26.469762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.086 [2024-11-18 03:28:26.469890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.086 [2024-11-18 03:28:26.469971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:23.086 [2024-11-18 03:28:26.469984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.086 [2024-11-18 03:28:26.469997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.086 [2024-11-18 03:28:26.470055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.086 [2024-11-18 03:28:26.470071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:23.086 [2024-11-18 03:28:26.470084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.086 [2024-11-18 03:28:26.470095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.086 [2024-11-18 03:28:26.470193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.086 [2024-11-18 03:28:26.470207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:23.086 [2024-11-18 03:28:26.470219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.086 [2024-11-18 03:28:26.470232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.086 [2024-11-18 03:28:26.470271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.086 [2024-11-18 03:28:26.470286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:23.086 [2024-11-18 03:28:26.470296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.086 [2024-11-18 03:28:26.470351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.086 [2024-11-18 03:28:26.470407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.086 [2024-11-18 03:28:26.470423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:28:23.086 [2024-11-18 03:28:26.470433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.087 [2024-11-18 03:28:26.470444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.087 [2024-11-18 03:28:26.470510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.087 [2024-11-18 03:28:26.470525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:23.087 [2024-11-18 03:28:26.470536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.087 [2024-11-18 03:28:26.470547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.087 [2024-11-18 03:28:26.470750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 90.172 ms, result 0 00:28:23.087 true 00:28:23.087 03:28:26 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 93270 00:28:23.087 03:28:26 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93270 ']' 00:28:23.087 03:28:26 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93270 00:28:23.087 03:28:26 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:28:23.087 03:28:26 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:23.087 03:28:26 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93270 00:28:23.087 03:28:26 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:23.087 03:28:26 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:23.087 03:28:26 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93270' 00:28:23.087 killing process with pid 93270 00:28:23.087 03:28:26 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 93270 00:28:23.087 03:28:26 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 93270 00:28:28.369 03:28:31 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:32.572 262144+0 records in 00:28:32.572 262144+0 records out 00:28:32.572 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.40094 s, 244 MB/s 00:28:32.572 03:28:35 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:34.556 03:28:37 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:34.556 [2024-11-18 03:28:37.854298] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:34.556 [2024-11-18 03:28:37.854398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93484 ] 00:28:34.556 [2024-11-18 03:28:37.996376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.556 [2024-11-18 03:28:38.045301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.818 [2024-11-18 03:28:38.158211] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:34.818 [2024-11-18 03:28:38.158283] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:34.818 [2024-11-18 03:28:38.318024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.818 [2024-11-18 03:28:38.318242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:34.818 [2024-11-18 03:28:38.318275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:34.818 [2024-11-18 03:28:38.318288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-18 03:28:38.318367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.818 [2024-11-18 03:28:38.318385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:34.818 [2024-11-18 03:28:38.318394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:34.818 [2024-11-18 03:28:38.318402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-18 03:28:38.318428] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:34.818 [2024-11-18 03:28:38.318684] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:34.818 [2024-11-18 03:28:38.318704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.818 [2024-11-18 03:28:38.318713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:34.818 [2024-11-18 03:28:38.318728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:28:34.818 [2024-11-18 03:28:38.318739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-18 03:28:38.320270] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:34.818 [2024-11-18 03:28:38.323296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.818 [2024-11-18 03:28:38.323346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:34.818 [2024-11-18 03:28:38.323357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.028 ms 00:28:34.818 [2024-11-18 03:28:38.323373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-18 03:28:38.323442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.818 [2024-11-18 03:28:38.323452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:34.818 [2024-11-18 03:28:38.323461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:34.818 [2024-11-18 03:28:38.323471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-18 03:28:38.330991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:34.818 [2024-11-18 03:28:38.331023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:34.818 [2024-11-18 03:28:38.331033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.470 ms 00:28:34.818 [2024-11-18 03:28:38.331044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-18 03:28:38.331145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.818 [2024-11-18 03:28:38.331158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:34.818 [2024-11-18 03:28:38.331178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:28:34.818 [2024-11-18 03:28:38.331195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-18 03:28:38.331247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.818 [2024-11-18 03:28:38.331271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:34.818 [2024-11-18 03:28:38.331287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:34.818 [2024-11-18 03:28:38.331301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.818 [2024-11-18 03:28:38.331349] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:34.819 [2024-11-18 03:28:38.333204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.819 [2024-11-18 03:28:38.333410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:34.819 [2024-11-18 03:28:38.333427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.867 ms 00:28:34.819 [2024-11-18 03:28:38.333435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.819 [2024-11-18 03:28:38.333472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.819 [2024-11-18 03:28:38.333482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:34.819 [2024-11-18 03:28:38.333491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:34.819 [2024-11-18 03:28:38.333500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.819 [2024-11-18 03:28:38.333532] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:34.819 [2024-11-18 03:28:38.333560] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:34.819 [2024-11-18 03:28:38.333599] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:34.819 [2024-11-18 03:28:38.333615] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:34.819 [2024-11-18 03:28:38.333722] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:34.819 [2024-11-18 03:28:38.333734] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:34.819 [2024-11-18 03:28:38.333745] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:34.819 [2024-11-18 03:28:38.333755] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:34.819 [2024-11-18 03:28:38.333768] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:34.819 [2024-11-18 03:28:38.333777] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:34.819 [2024-11-18 03:28:38.333789] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:34.819 [2024-11-18 03:28:38.333797] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:34.819 [2024-11-18 03:28:38.333806] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:34.819 [2024-11-18 03:28:38.333814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.819 [2024-11-18 03:28:38.333822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:34.819 [2024-11-18 03:28:38.333834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:28:34.819 [2024-11-18 03:28:38.333841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.819 [2024-11-18 03:28:38.333926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.819 [2024-11-18 03:28:38.333942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:34.819 [2024-11-18 03:28:38.333951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:34.819 [2024-11-18 03:28:38.333963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.819 [2024-11-18 03:28:38.334066] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:34.819 [2024-11-18 03:28:38.334077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:34.819 [2024-11-18 03:28:38.334091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:34.819 [2024-11-18 03:28:38.334106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:34.819 [2024-11-18 03:28:38.334115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:34.819 [2024-11-18 03:28:38.334123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:34.819 [2024-11-18 03:28:38.334131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:34.819 [2024-11-18 03:28:38.334139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:34.819 [2024-11-18 03:28:38.334149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:34.819 [2024-11-18 03:28:38.334157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:34.819 [2024-11-18 03:28:38.334165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:34.819 [2024-11-18 03:28:38.334172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:34.819 [2024-11-18 03:28:38.334185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:34.819 [2024-11-18 03:28:38.334195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:34.819 [2024-11-18 03:28:38.334203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:34.819 [2024-11-18 03:28:38.334210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:34.819 [2024-11-18 03:28:38.334218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:34.819 [2024-11-18 03:28:38.334226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:34.819 [2024-11-18 03:28:38.334234] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:34.819 [2024-11-18 03:28:38.334242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:34.819 [2024-11-18 03:28:38.334250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:34.819 [2024-11-18 03:28:38.334258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:34.819 [2024-11-18 03:28:38.334268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:34.819 [2024-11-18 03:28:38.334277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:34.819 [2024-11-18 03:28:38.334284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:34.819 [2024-11-18 03:28:38.334292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:34.819 [2024-11-18 03:28:38.334299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:34.819 [2024-11-18 03:28:38.334307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:34.819 [2024-11-18 03:28:38.334334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:34.819 [2024-11-18 03:28:38.334342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:34.819 [2024-11-18 03:28:38.334349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:34.819 [2024-11-18 03:28:38.334356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:34.819 [2024-11-18 03:28:38.334364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:34.819 [2024-11-18 03:28:38.334371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:34.819 [2024-11-18 03:28:38.334377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:34.819 [2024-11-18 03:28:38.334384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:34.819 [2024-11-18 03:28:38.334391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:34.819 [2024-11-18 03:28:38.334397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:34.819 [2024-11-18 03:28:38.334404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:34.819 [2024-11-18 03:28:38.334411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:34.819 [2024-11-18 03:28:38.334419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:34.819 [2024-11-18 03:28:38.334425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:34.819 [2024-11-18 03:28:38.334432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:34.819 [2024-11-18 03:28:38.334439] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:34.819 [2024-11-18 03:28:38.334450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:34.819 [2024-11-18 03:28:38.334459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:34.819 [2024-11-18 03:28:38.334469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:34.819 [2024-11-18 03:28:38.334477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:34.819 [2024-11-18 03:28:38.334484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:34.819 [2024-11-18 03:28:38.334490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:34.819 
[2024-11-18 03:28:38.334498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:34.819 [2024-11-18 03:28:38.334505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:34.819 [2024-11-18 03:28:38.334511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:34.819 [2024-11-18 03:28:38.334519] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:34.819 [2024-11-18 03:28:38.334529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:34.819 [2024-11-18 03:28:38.334538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:34.819 [2024-11-18 03:28:38.334547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:34.819 [2024-11-18 03:28:38.334555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:34.819 [2024-11-18 03:28:38.334562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:34.819 [2024-11-18 03:28:38.334569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:34.819 [2024-11-18 03:28:38.334579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:34.819 [2024-11-18 03:28:38.334586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:34.819 [2024-11-18 03:28:38.334617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:34.819 [2024-11-18 03:28:38.334625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:34.819 [2024-11-18 03:28:38.334632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:34.819 [2024-11-18 03:28:38.334640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:34.819 [2024-11-18 03:28:38.334647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:34.819 [2024-11-18 03:28:38.334654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:34.819 [2024-11-18 03:28:38.334662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:34.819 [2024-11-18 03:28:38.334669] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:34.820 [2024-11-18 03:28:38.334677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:34.820 [2024-11-18 03:28:38.334686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:34.820 [2024-11-18 03:28:38.334695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:34.820 [2024-11-18 03:28:38.334703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:34.820 [2024-11-18 03:28:38.334711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:34.820 [2024-11-18 03:28:38.334719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.820 [2024-11-18 03:28:38.334729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:34.820 [2024-11-18 03:28:38.334739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:28:34.820 [2024-11-18 03:28:38.334747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.820 [2024-11-18 03:28:38.358436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.820 [2024-11-18 03:28:38.358484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:34.820 [2024-11-18 03:28:38.358504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.644 ms 00:28:34.820 [2024-11-18 03:28:38.358514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.820 [2024-11-18 03:28:38.358637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.820 [2024-11-18 03:28:38.358651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:34.820 [2024-11-18 03:28:38.358660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:28:34.820 [2024-11-18 03:28:38.358668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.820 [2024-11-18 03:28:38.371683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.820 [2024-11-18 03:28:38.371724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:34.820 [2024-11-18 03:28:38.371735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.954 ms 00:28:34.820 [2024-11-18 03:28:38.371743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.820 [2024-11-18 03:28:38.371778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.820 [2024-11-18 03:28:38.371787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:34.820 [2024-11-18 03:28:38.371796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:34.820 [2024-11-18 03:28:38.371804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.820 [2024-11-18 03:28:38.372373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.820 [2024-11-18 03:28:38.372405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:34.820 [2024-11-18 03:28:38.372417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:28:34.820 [2024-11-18 03:28:38.372426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.820 [2024-11-18 03:28:38.372574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.820 [2024-11-18 03:28:38.372759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:34.820 [2024-11-18 03:28:38.372780] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:28:34.820 [2024-11-18 03:28:38.372788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.820 [2024-11-18 03:28:38.380445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.820 [2024-11-18 03:28:38.380484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:34.820 [2024-11-18 03:28:38.380502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.624 ms 00:28:34.820 [2024-11-18 03:28:38.380510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.820 [2024-11-18 03:28:38.384016] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:34.820 [2024-11-18 03:28:38.384191] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:34.820 [2024-11-18 03:28:38.384216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.820 [2024-11-18 03:28:38.384224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:34.820 [2024-11-18 03:28:38.384234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.609 ms 00:28:34.820 [2024-11-18 03:28:38.384242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.081 [2024-11-18 03:28:38.399903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.081 [2024-11-18 03:28:38.399952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:35.081 [2024-11-18 03:28:38.399964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.618 ms 00:28:35.081 [2024-11-18 03:28:38.399976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.081 [2024-11-18 03:28:38.402454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.081 [2024-11-18 03:28:38.402633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:35.081 [2024-11-18 03:28:38.402650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.428 ms 00:28:35.081 [2024-11-18 03:28:38.402658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.081 [2024-11-18 03:28:38.404833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.081 [2024-11-18 03:28:38.404876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:35.081 [2024-11-18 03:28:38.404887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:28:35.081 [2024-11-18 03:28:38.404895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.081 [2024-11-18 03:28:38.405261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.081 [2024-11-18 03:28:38.405276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:35.081 [2024-11-18 03:28:38.405287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:28:35.081 [2024-11-18 03:28:38.405296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.081 [2024-11-18 03:28:38.432076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.081 [2024-11-18 03:28:38.432133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:35.081 [2024-11-18 03:28:38.432150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.735 ms 00:28:35.081 [2024-11-18 03:28:38.432159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.081 [2024-11-18 03:28:38.440665] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:35.081 [2024-11-18 03:28:38.443784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.081 [2024-11-18 03:28:38.443956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:35.081 [2024-11-18 03:28:38.443976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.570 ms 00:28:35.081 [2024-11-18 03:28:38.443995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.081 [2024-11-18 03:28:38.444075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.081 [2024-11-18 03:28:38.444087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:35.081 [2024-11-18 03:28:38.444097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:35.081 [2024-11-18 03:28:38.444106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.081 [2024-11-18 03:28:38.444179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.081 [2024-11-18 03:28:38.444191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:35.081 [2024-11-18 03:28:38.444200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:35.081 [2024-11-18 03:28:38.444208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.081 [2024-11-18 03:28:38.444236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.081 [2024-11-18 03:28:38.444247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:35.081 [2024-11-18 03:28:38.444257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:35.081 [2024-11-18 03:28:38.444271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.081 [2024-11-18 03:28:38.444331] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:35.081 [2024-11-18 03:28:38.444348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.081 [2024-11-18 03:28:38.444362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:35.081 [2024-11-18 03:28:38.444378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:35.081 [2024-11-18 03:28:38.444386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.081 [2024-11-18 03:28:38.450307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.081 [2024-11-18 03:28:38.450380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:35.081 [2024-11-18 03:28:38.450391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.900 ms 00:28:35.081 [2024-11-18 03:28:38.450399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.082 [2024-11-18 03:28:38.450485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.082 [2024-11-18 03:28:38.450496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:35.082 [2024-11-18 03:28:38.450505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:35.082 [2024-11-18 03:28:38.450513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.082 
[2024-11-18 03:28:38.451779] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.210 ms, result 0 00:28:36.025  [2024-11-18T03:28:40.541Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-18T03:28:41.484Z] Copying: 41/1024 [MB] (22 MBps) [2024-11-18T03:28:42.866Z] Copying: 57/1024 [MB] (15 MBps) [2024-11-18T03:28:43.800Z] Copying: 73/1024 [MB] (16 MBps) [2024-11-18T03:28:44.736Z] Copying: 95/1024 [MB] (22 MBps) [2024-11-18T03:28:45.674Z] Copying: 119/1024 [MB] (23 MBps) [2024-11-18T03:28:46.615Z] Copying: 143/1024 [MB] (23 MBps) [2024-11-18T03:28:47.556Z] Copying: 157/1024 [MB] (13 MBps) [2024-11-18T03:28:48.498Z] Copying: 178/1024 [MB] (21 MBps) [2024-11-18T03:28:49.871Z] Copying: 193/1024 [MB] (14 MBps) [2024-11-18T03:28:50.805Z] Copying: 205/1024 [MB] (12 MBps) [2024-11-18T03:28:51.744Z] Copying: 226/1024 [MB] (20 MBps) [2024-11-18T03:28:52.681Z] Copying: 241/1024 [MB] (14 MBps) [2024-11-18T03:28:53.617Z] Copying: 261/1024 [MB] (20 MBps) [2024-11-18T03:28:54.554Z] Copying: 284/1024 [MB] (22 MBps) [2024-11-18T03:28:55.488Z] Copying: 297/1024 [MB] (12 MBps) [2024-11-18T03:28:56.861Z] Copying: 308/1024 [MB] (11 MBps) [2024-11-18T03:28:57.794Z] Copying: 319/1024 [MB] (11 MBps) [2024-11-18T03:28:58.728Z] Copying: 330/1024 [MB] (11 MBps) [2024-11-18T03:28:59.667Z] Copying: 342/1024 [MB] (11 MBps) [2024-11-18T03:29:00.601Z] Copying: 352/1024 [MB] (10 MBps) [2024-11-18T03:29:01.611Z] Copying: 364/1024 [MB] (11 MBps) [2024-11-18T03:29:02.546Z] Copying: 375/1024 [MB] (11 MBps) [2024-11-18T03:29:03.486Z] Copying: 386/1024 [MB] (11 MBps) [2024-11-18T03:29:04.863Z] Copying: 397/1024 [MB] (10 MBps) [2024-11-18T03:29:05.797Z] Copying: 409/1024 [MB] (11 MBps) [2024-11-18T03:29:06.732Z] Copying: 420/1024 [MB] (11 MBps) [2024-11-18T03:29:07.673Z] Copying: 432/1024 [MB] (11 MBps) [2024-11-18T03:29:08.608Z] Copying: 443/1024 [MB] (11 MBps) [2024-11-18T03:29:09.543Z] Copying: 455/1024 [MB] (11 MBps) [2024-11-18T03:29:10.500Z] Copying: 467/1024 [MB] (11 MBps) [2024-11-18T03:29:11.875Z] Copying: 478/1024 [MB] (11 MBps) [2024-11-18T03:29:12.811Z] Copying: 490/1024 [MB] (11 MBps) [2024-11-18T03:29:13.746Z] Copying: 501/1024 [MB] (11 MBps) [2024-11-18T03:29:14.681Z] Copying: 512/1024 [MB] (11 MBps) [2024-11-18T03:29:15.618Z] Copying: 524/1024 [MB] (11 MBps) [2024-11-18T03:29:16.553Z] Copying: 537/1024 [MB] (13 MBps) [2024-11-18T03:29:17.488Z] Copying: 548/1024 [MB] (11 MBps) [2024-11-18T03:29:18.875Z] Copying: 560/1024 [MB] (11 MBps) [2024-11-18T03:29:19.814Z] Copying: 572/1024 [MB] (11 MBps) [2024-11-18T03:29:20.749Z] Copying: 595904/1048576 [kB] (10152 kBps) [2024-11-18T03:29:21.684Z] Copying: 593/1024 [MB] (11 MBps) [2024-11-18T03:29:22.621Z] Copying: 604/1024 [MB] (11 MBps) [2024-11-18T03:29:23.559Z] Copying: 615/1024 [MB] (11 MBps) [2024-11-18T03:29:24.494Z] Copying: 626/1024 [MB] (10 MBps) [2024-11-18T03:29:25.872Z] Copying: 637/1024 [MB] (11 MBps) [2024-11-18T03:29:26.807Z] Copying: 648/1024 [MB] (10 MBps) [2024-11-18T03:29:27.742Z] Copying: 659/1024 [MB] (11 MBps) [2024-11-18T03:29:28.676Z] Copying: 672/1024 [MB] (13 MBps) [2024-11-18T03:29:29.611Z] Copying: 686/1024 [MB] (13 MBps) [2024-11-18T03:29:30.545Z] Copying: 697/1024 [MB] (11 MBps) [2024-11-18T03:29:31.481Z] Copying: 709/1024 [MB] (11 MBps) [2024-11-18T03:29:32.491Z] Copying: 720/1024 [MB] (11 MBps) [2024-11-18T03:29:33.905Z] Copying: 732/1024 [MB] (11 MBps) [2024-11-18T03:29:34.476Z] Copying: 743/1024 [MB] (11 MBps) [2024-11-18T03:29:35.857Z] Copying: 755/1024 [MB] (11 MBps) 
[2024-11-18T03:29:36.797Z] Copying: 766/1024 [MB] (11 MBps) [2024-11-18T03:29:37.739Z] Copying: 777/1024 [MB] (10 MBps) [2024-11-18T03:29:38.678Z] Copying: 787/1024 [MB] (10 MBps) [2024-11-18T03:29:39.620Z] Copying: 798/1024 [MB] (10 MBps) [2024-11-18T03:29:40.562Z] Copying: 809/1024 [MB] (10 MBps) [2024-11-18T03:29:41.505Z] Copying: 819/1024 [MB] (10 MBps) [2024-11-18T03:29:42.892Z] Copying: 849480/1048576 [kB] (10172 kBps) [2024-11-18T03:29:43.834Z] Copying: 839/1024 [MB] (10 MBps) [2024-11-18T03:29:44.776Z] Copying: 850/1024 [MB] (10 MBps) [2024-11-18T03:29:45.717Z] Copying: 861/1024 [MB] (10 MBps) [2024-11-18T03:29:46.657Z] Copying: 872/1024 [MB] (10 MBps) [2024-11-18T03:29:47.597Z] Copying: 882/1024 [MB] (10 MBps) [2024-11-18T03:29:48.540Z] Copying: 907/1024 [MB] (24 MBps) [2024-11-18T03:29:49.480Z] Copying: 932/1024 [MB] (25 MBps) [2024-11-18T03:29:50.858Z] Copying: 949/1024 [MB] (17 MBps) [2024-11-18T03:29:51.797Z] Copying: 969/1024 [MB] (20 MBps) [2024-11-18T03:29:52.737Z] Copying: 981/1024 [MB] (11 MBps) [2024-11-18T03:29:53.676Z] Copying: 994/1024 [MB] (13 MBps) [2024-11-18T03:29:54.618Z] Copying: 1005/1024 [MB] (11 MBps) [2024-11-18T03:29:55.559Z] Copying: 1015/1024 [MB] (10 MBps) [2024-11-18T03:29:55.559Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-18 03:29:55.262847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.982 [2024-11-18 03:29:55.262892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:51.982 [2024-11-18 03:29:55.262906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:51.982 [2024-11-18 03:29:55.262914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.982 [2024-11-18 03:29:55.262944] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:51.982 [2024-11-18 03:29:55.263424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.982 [2024-11-18 03:29:55.263441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:51.982 [2024-11-18 03:29:55.263451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:29:51.982 [2024-11-18 03:29:55.263459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.982 [2024-11-18 03:29:55.265874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.982 [2024-11-18 03:29:55.265904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:51.982 [2024-11-18 03:29:55.265914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.396 ms 00:29:51.982 [2024-11-18 03:29:55.265921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.982 [2024-11-18 03:29:55.265951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.982 [2024-11-18 03:29:55.265962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:51.982 [2024-11-18 03:29:55.265969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:51.982 [2024-11-18 03:29:55.265977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.982 [2024-11-18 03:29:55.266019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.982 [2024-11-18 03:29:55.266027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:51.982 [2024-11-18 03:29:55.266034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.015 ms 00:29:51.982 [2024-11-18 03:29:55.266042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.982 [2024-11-18 03:29:55.266054] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:51.982 [2024-11-18 03:29:55.266065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:51.982 [2024-11-18 03:29:55.266078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:51.982 [2024-11-18 03:29:55.266085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:51.982 [2024-11-18 03:29:55.266093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:51.982 [2024-11-18 03:29:55.266100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:51.982 [2024-11-18 03:29:55.266107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:51.982 [2024-11-18 03:29:55.266114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:51.982 [2024-11-18 03:29:55.266121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:51.982 [2024-11-18 03:29:55.266128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:51.982 [2024-11-18 03:29:55.266135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:51.982 [2024-11-18 03:29:55.266142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:51.982 [2024-11-18 03:29:55.266150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:51.982 [2024-11-18 03:29:55.266157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 
[2024-11-18 03:29:55.266245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:29:51.983 [2024-11-18 03:29:55.266440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:51.983 [2024-11-18 03:29:55.266835] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:51.984 [2024-11-18 03:29:55.266847] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b07b35b-3d9f-4fe6-9057-425d629783dd 00:29:51.984 [2024-11-18 03:29:55.266855] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:51.984 [2024-11-18 03:29:55.266862] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:51.984 [2024-11-18 03:29:55.266868] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:51.984 [2024-11-18 03:29:55.266875] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:51.984 [2024-11-18 03:29:55.266882] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:51.984 [2024-11-18 03:29:55.266889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:51.984 [2024-11-18 03:29:55.266896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:51.984 [2024-11-18 03:29:55.266902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:51.984 [2024-11-18 03:29:55.266908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:51.984 [2024-11-18 03:29:55.266915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.984 [2024-11-18 03:29:55.266922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:51.984 [2024-11-18 03:29:55.266929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:29:51.984 [2024-11-18 03:29:55.266936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.268346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.984 [2024-11-18 03:29:55.268460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:51.984 [2024-11-18 03:29:55.268475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:29:51.984 [2024-11-18 03:29:55.268483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.268561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.984 [2024-11-18 03:29:55.268569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:51.984 [2024-11-18 03:29:55.268577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:51.984 [2024-11-18 03:29:55.268587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.273062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.984 [2024-11-18 03:29:55.273164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:51.984 [2024-11-18 03:29:55.273214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.984 [2024-11-18 03:29:55.273236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.273300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.984 [2024-11-18 03:29:55.273336] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:51.984 [2024-11-18 03:29:55.273356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.984 [2024-11-18 03:29:55.273379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.273425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.984 [2024-11-18 03:29:55.273488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:51.984 [2024-11-18 03:29:55.273511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.984 [2024-11-18 03:29:55.273530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.273555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.984 [2024-11-18 03:29:55.273576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:51.984 [2024-11-18 03:29:55.273594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.984 [2024-11-18 03:29:55.273611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.282284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.984 [2024-11-18 03:29:55.282429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:51.984 [2024-11-18 03:29:55.282479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.984 [2024-11-18 03:29:55.282503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.289516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.984 [2024-11-18 03:29:55.289642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:51.984 [2024-11-18 03:29:55.289688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.984 [2024-11-18 03:29:55.289715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.289768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.984 [2024-11-18 03:29:55.289790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:51.984 [2024-11-18 03:29:55.289809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.984 [2024-11-18 03:29:55.289827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.289861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.984 [2024-11-18 03:29:55.289917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:51.984 [2024-11-18 03:29:55.289940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.984 [2024-11-18 03:29:55.289958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.290009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.984 [2024-11-18 03:29:55.290019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:51.984 [2024-11-18 03:29:55.290026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.984 [2024-11-18 03:29:55.290034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.290061] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.984 [2024-11-18 03:29:55.290070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:51.984 [2024-11-18 03:29:55.290078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.984 [2024-11-18 03:29:55.290086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.290123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.984 [2024-11-18 03:29:55.290133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:51.984 [2024-11-18 03:29:55.290141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.984 [2024-11-18 03:29:55.290148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.290188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.984 [2024-11-18 03:29:55.290197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:51.984 [2024-11-18 03:29:55.290205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.984 [2024-11-18 03:29:55.290212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.984 [2024-11-18 03:29:55.290348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 27.449 ms, result 0 00:29:52.244 00:29:52.244 00:29:52.244 03:29:55 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:52.244 [2024-11-18 03:29:55.742773] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:52.244 [2024-11-18 03:29:55.743896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94268 ] 00:29:52.504 [2024-11-18 03:29:55.902854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.504 [2024-11-18 03:29:55.955279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.504 [2024-11-18 03:29:56.069044] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:52.504 [2024-11-18 03:29:56.069131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:52.765 [2024-11-18 03:29:56.230129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.765 [2024-11-18 03:29:56.230374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:52.765 [2024-11-18 03:29:56.230406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:52.765 [2024-11-18 03:29:56.230420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.765 [2024-11-18 03:29:56.230487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.765 [2024-11-18 03:29:56.230500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:52.765 [2024-11-18 03:29:56.230509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:52.765 [2024-11-18 03:29:56.230517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.765 [2024-11-18 03:29:56.230540] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:52.765 [2024-11-18 03:29:56.230850] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:52.765 [2024-11-18 03:29:56.230869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.765 [2024-11-18 03:29:56.230878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:52.765 [2024-11-18 03:29:56.230892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:29:52.765 [2024-11-18 03:29:56.230904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.765 [2024-11-18 03:29:56.231228] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:52.765 [2024-11-18 03:29:56.231256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.765 [2024-11-18 03:29:56.231270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:52.765 [2024-11-18 03:29:56.231281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:52.765 [2024-11-18 03:29:56.231292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.765 [2024-11-18 03:29:56.231370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.765 [2024-11-18 03:29:56.231384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:52.765 [2024-11-18 03:29:56.231393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:52.765 [2024-11-18 03:29:56.231401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.765 [2024-11-18 03:29:56.231648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:52.765 [2024-11-18 03:29:56.231659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:52.765 [2024-11-18 03:29:56.231668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:29:52.765 [2024-11-18 03:29:56.231676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.765 [2024-11-18 03:29:56.231756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.765 [2024-11-18 03:29:56.231771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:52.765 [2024-11-18 03:29:56.231779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:52.765 [2024-11-18 03:29:56.231787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.765 [2024-11-18 03:29:56.231809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.765 [2024-11-18 03:29:56.231817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:52.765 [2024-11-18 03:29:56.231832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:52.765 [2024-11-18 03:29:56.231841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.765 [2024-11-18 03:29:56.231866] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:52.765 [2024-11-18 03:29:56.233942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.765 [2024-11-18 03:29:56.234109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:52.765 [2024-11-18 03:29:56.234132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:29:52.765 [2024-11-18 03:29:56.234141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.765 [2024-11-18 03:29:56.234177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.765 [2024-11-18 03:29:56.234187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:52.765 [2024-11-18 03:29:56.234196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:52.765 [2024-11-18 03:29:56.234205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.765 [2024-11-18 03:29:56.234259] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:52.765 [2024-11-18 03:29:56.234284] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:52.765 [2024-11-18 03:29:56.234353] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:52.765 [2024-11-18 03:29:56.234372] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:52.765 [2024-11-18 03:29:56.234483] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:52.765 [2024-11-18 03:29:56.234493] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:52.765 [2024-11-18 03:29:56.234504] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:52.765 [2024-11-18 03:29:56.234515] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:52.766 [2024-11-18 03:29:56.234524] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:52.766 [2024-11-18 03:29:56.234536] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:52.766 [2024-11-18 03:29:56.234546] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:52.766 [2024-11-18 03:29:56.234554] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:52.766 [2024-11-18 03:29:56.234561] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:52.766 [2024-11-18 03:29:56.234569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.766 [2024-11-18 03:29:56.234576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:52.766 [2024-11-18 03:29:56.234585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:29:52.766 [2024-11-18 03:29:56.234592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.766 [2024-11-18 03:29:56.234708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.766 [2024-11-18 03:29:56.234719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:52.766 [2024-11-18 03:29:56.234727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:29:52.766 [2024-11-18 03:29:56.234738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.766 [2024-11-18 03:29:56.234836] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:52.766 [2024-11-18 03:29:56.234847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:52.766 [2024-11-18 03:29:56.234855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:52.766 [2024-11-18 03:29:56.234866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.766 [2024-11-18 03:29:56.234874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:52.766 [2024-11-18 03:29:56.234889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:52.766 [2024-11-18 03:29:56.234896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:52.766 [2024-11-18 03:29:56.234903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:52.766 [2024-11-18 03:29:56.234911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:52.766 [2024-11-18 03:29:56.234918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:52.766 [2024-11-18 03:29:56.234929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:52.766 [2024-11-18 03:29:56.234936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:52.766 [2024-11-18 03:29:56.234944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:52.766 [2024-11-18 03:29:56.234951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:52.766 [2024-11-18 03:29:56.234958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:52.766 [2024-11-18 03:29:56.234965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.766 [2024-11-18 03:29:56.234972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:52.766 [2024-11-18 03:29:56.234979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:52.766 [2024-11-18 03:29:56.234986] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.766 [2024-11-18 03:29:56.234996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:52.766 [2024-11-18 03:29:56.235003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:52.766 [2024-11-18 03:29:56.235009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:52.766 [2024-11-18 03:29:56.235016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:52.766 [2024-11-18 03:29:56.235022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:52.766 [2024-11-18 03:29:56.235028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:52.766 [2024-11-18 03:29:56.235035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:52.766 [2024-11-18 03:29:56.235041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:52.766 [2024-11-18 03:29:56.235048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:52.766 [2024-11-18 03:29:56.235054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:52.766 [2024-11-18 03:29:56.235061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:52.766 [2024-11-18 03:29:56.235068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:52.766 [2024-11-18 03:29:56.235075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:52.766 [2024-11-18 03:29:56.235082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:52.766 [2024-11-18 03:29:56.235089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:52.766 [2024-11-18 03:29:56.235095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:52.766 [2024-11-18 03:29:56.235107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:52.766 [2024-11-18 03:29:56.235114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:52.766 [2024-11-18 03:29:56.235121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:52.766 [2024-11-18 03:29:56.235128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:52.766 [2024-11-18 03:29:56.235134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.766 [2024-11-18 03:29:56.235142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:52.766 [2024-11-18 03:29:56.235148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:52.766 [2024-11-18 03:29:56.235156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.766 [2024-11-18 03:29:56.235163] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:52.766 [2024-11-18 03:29:56.235177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:52.766 [2024-11-18 03:29:56.235185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:52.766 [2024-11-18 03:29:56.235193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.766 [2024-11-18 03:29:56.235201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:52.766 [2024-11-18 03:29:56.235209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:52.766 [2024-11-18 03:29:56.235216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:52.766 
[2024-11-18 03:29:56.235223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:52.766 [2024-11-18 03:29:56.235232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:52.766 [2024-11-18 03:29:56.235239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:52.766 [2024-11-18 03:29:56.235247] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:52.766 [2024-11-18 03:29:56.235259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:52.766 [2024-11-18 03:29:56.235267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:52.766 [2024-11-18 03:29:56.235275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:52.766 [2024-11-18 03:29:56.235283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:52.766 [2024-11-18 03:29:56.235289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:52.766 [2024-11-18 03:29:56.235296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:52.766 [2024-11-18 03:29:56.235304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:52.766 [2024-11-18 03:29:56.235324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:52.766 [2024-11-18 03:29:56.235332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:52.766 [2024-11-18 03:29:56.235339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:52.766 [2024-11-18 03:29:56.235346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:52.766 [2024-11-18 03:29:56.235354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:52.766 [2024-11-18 03:29:56.235360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:52.766 [2024-11-18 03:29:56.235370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:52.766 [2024-11-18 03:29:56.235378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:52.766 [2024-11-18 03:29:56.235385] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:52.766 [2024-11-18 03:29:56.235394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:52.766 [2024-11-18 03:29:56.235403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:52.766 [2024-11-18 03:29:56.235411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:52.766 [2024-11-18 03:29:56.235418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:52.766 [2024-11-18 03:29:56.235426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:52.766 [2024-11-18 03:29:56.235434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.766 [2024-11-18 03:29:56.235441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:52.766 [2024-11-18 03:29:56.235448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:29:52.766 [2024-11-18 03:29:56.235457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.766 [2024-11-18 03:29:56.254395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.766 [2024-11-18 03:29:56.254578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:52.766 [2024-11-18 03:29:56.254674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.894 ms 00:29:52.766 [2024-11-18 03:29:56.254700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.766 [2024-11-18 03:29:56.254807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.766 [2024-11-18 03:29:56.254831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:52.766 [2024-11-18 03:29:56.254851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:52.767 [2024-11-18 03:29:56.254871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.266591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.266763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:52.767 [2024-11-18 03:29:56.266841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.630 ms 00:29:52.767 [2024-11-18 03:29:56.266867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.266922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.266949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:52.767 [2024-11-18 03:29:56.266979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:52.767 [2024-11-18 03:29:56.267000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.267122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.267230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:52.767 [2024-11-18 03:29:56.267259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:52.767 [2024-11-18 03:29:56.267288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.267462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.267498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:52.767 [2024-11-18 03:29:56.267605] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:29:52.767 [2024-11-18 03:29:56.267637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.274501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.274659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:52.767 [2024-11-18 03:29:56.274719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.827 ms 00:29:52.767 [2024-11-18 03:29:56.274771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.274906] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:52.767 [2024-11-18 03:29:56.274998] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:52.767 [2024-11-18 03:29:56.275039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.275099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:52.767 [2024-11-18 03:29:56.275124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:29:52.767 [2024-11-18 03:29:56.275207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.287540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.287684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:52.767 [2024-11-18 03:29:56.287743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.275 ms 00:29:52.767 [2024-11-18 03:29:56.287802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.287955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.288005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:52.767 [2024-11-18 03:29:56.288029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:29:52.767 [2024-11-18 03:29:56.288049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.288124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.288150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:52.767 [2024-11-18 03:29:56.288228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:52.767 [2024-11-18 03:29:56.288256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.288604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.288785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:52.767 [2024-11-18 03:29:56.288814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:29:52.767 [2024-11-18 03:29:56.288833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.288876] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:52.767 [2024-11-18 03:29:56.288908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.288928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:29:52.767 [2024-11-18 03:29:56.288948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:52.767 [2024-11-18 03:29:56.288976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.298296] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:52.767 [2024-11-18 03:29:56.298563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.298595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:52.767 [2024-11-18 03:29:56.298791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.506 ms 00:29:52.767 [2024-11-18 03:29:56.298830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.301349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.301492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:52.767 [2024-11-18 03:29:56.301547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:29:52.767 [2024-11-18 03:29:56.301570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.301689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.301717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:52.767 [2024-11-18 03:29:56.301738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:52.767 [2024-11-18 03:29:56.301757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.301792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.301871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:52.767 [2024-11-18 03:29:56.301896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:52.767 [2024-11-18 03:29:56.301914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.301966] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:52.767 [2024-11-18 03:29:56.302066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.302090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:52.767 [2024-11-18 03:29:56.302149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:29:52.767 [2024-11-18 03:29:56.302171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.308138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.308303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:52.767 [2024-11-18 03:29:56.308384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.930 ms 00:29:52.767 [2024-11-18 03:29:56.308407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.308573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.767 [2024-11-18 03:29:56.308630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:52.767 [2024-11-18 03:29:56.308641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.047 ms 00:29:52.767 [2024-11-18 03:29:56.308649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.767 [2024-11-18 03:29:56.310360] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 79.782 ms, result 0 00:29:54.152
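
The startup that just finished is a pipeline of management steps, each logged by mngt/ftl_mngt.c trace_step as an Action / name / duration / status quadruplet and totalled by finish_msg ('FTL startup', duration = 79.782 ms, result 0). A minimal sketch of pulling the per-step timings out of such a log to see which steps dominate; the regex and helper name are mine, not part of SPDK:

import re

# "name: <step>" is followed by the runner's elapsed-time marker and,
# after the next timestamp, by "duration: <ms> ms".
STEP = re.compile(r"name: (.+?) \d{2}:\d{2}:\d{2}\.\d+.*?duration: ([0-9.]+) ms",
                  re.DOTALL)

def step_durations(log_text):
    """Return [(step_name, duration_ms), ...] in the order logged."""
    return [(name, float(ms)) for name, ms in STEP.findall(log_text)]

sample = ("name: Initialize L2P 00:29:52.767 [2024-11-18 03:29:56.298791] "
          "mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] "
          "duration: 9.506 ms")
print(step_durations(sample))   # [('Initialize L2P', 9.506)]

Applied to the startup above, the heavyweights are Restore valid map metadata (12.275 ms), Initialize L2P (9.506 ms), Initialize reloc (6.827 ms) and Set FTL dirty state (5.930 ms); the remaining steps contribute little.
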
[2024-11-18T03:29:58.670Z] Copying: 17/1024 [MB] (17 MBps) [ ...spdk_dd per-interval progress samples through 1023/1024 MB condensed; rates ranged 10-23 MBps... ] [2024-11-18T03:31:05.791Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-18 03:31:05.567756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.214 [2024-11-18 03:31:05.568046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:02.214 [2024-11-18 03:31:05.568184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:02.214 [2024-11-18 03:31:05.568224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.214 [2024-11-18 03:31:05.568294] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:02.214 [2024-11-18 03:31:05.569207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.214 [2024-11-18 03:31:05.569395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:02.214 [2024-11-18 03:31:05.569423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:31:02.214 [2024-11-18 03:31:05.569436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.214 [2024-11-18 03:31:05.569749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.214 [2024-11-18 03:31:05.569764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:02.214 [2024-11-18 03:31:05.569776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:31:02.214 [2024-11-18 03:31:05.569786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.214 [2024-11-18 03:31:05.569824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.214 [2024-11-18 03:31:05.569839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:02.214 [2024-11-18 03:31:05.569855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:02.214 [2024-11-18 03:31:05.569866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.214 [2024-11-18 03:31:05.569939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.214 [2024-11-18 03:31:05.569954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:02.214 [2024-11-18 03:31:05.569966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:02.214 [2024-11-18 03:31:05.569977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.214 [2024-11-18 03:31:05.569996] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:02.214
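
The dump that follows prints one validity line per band; in this run all 100 bands are identical (0 / 261120 valid blocks, wr_cnt 0, state free), so the listing is condensed. For larger or mixed dumps, folding the lines into counts per state makes them skimmable; a sketch under names of my own choosing, not part of SPDK:

import re
from collections import Counter

# Fold ftl_dev_dump_bands output into counts per
# (valid, total, wr_cnt, state) combination.
BAND = re.compile(r"Band \d+: (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

def summarize_bands(log_text):
    """Map (valid, total, wr_cnt, state) -> number of bands matching it."""
    return Counter(BAND.findall(log_text))

sample = ("Band 1: 0 / 261120 wr_cnt: 0 state: free "
          "Band 2: 0 / 261120 wr_cnt: 0 state: free")
print(summarize_bands(sample))
# Counter({('0', '261120', '0', 'free'): 2})
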
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570299] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:02.214 [2024-11-18 03:31:05.570499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 
[2024-11-18 03:31:05.570610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 
state: free 00:31:02.215 [2024-11-18 03:31:05.570923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.570992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:31:02.215 [2024-11-18 03:31:05.571208] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:02.215 [2024-11-18 03:31:05.571220] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b07b35b-3d9f-4fe6-9057-425d629783dd 00:31:02.215 [2024-11-18 03:31:05.571236] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:02.215 [2024-11-18 03:31:05.571247] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:02.215 [2024-11-18 03:31:05.571257] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:02.215 [2024-11-18 03:31:05.571268] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:02.215 [2024-11-18 03:31:05.571278] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:02.215 [2024-11-18 03:31:05.571294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:02.215 [2024-11-18 03:31:05.571304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:02.215 [2024-11-18 03:31:05.571328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:02.215 [2024-11-18 03:31:05.571337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:02.215 [2024-11-18 03:31:05.571346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.215 [2024-11-18 03:31:05.571356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:02.215 [2024-11-18 03:31:05.571366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.352 ms 00:31:02.215 [2024-11-18 03:31:05.571375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.215 [2024-11-18 03:31:05.574509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.215 [2024-11-18 03:31:05.574543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:02.215 [2024-11-18 03:31:05.574556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.112 ms 00:31:02.215 [2024-11-18 03:31:05.574566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.215 [2024-11-18 03:31:05.574720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.215 [2024-11-18 03:31:05.574734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:02.215 [2024-11-18 03:31:05.574747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:31:02.215 [2024-11-18 03:31:05.574758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.215 [2024-11-18 03:31:05.581849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.215 [2024-11-18 03:31:05.581909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:02.215 [2024-11-18 03:31:05.581923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.215 [2024-11-18 03:31:05.581932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.215 [2024-11-18 03:31:05.582009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.215 [2024-11-18 03:31:05.582020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:02.215 [2024-11-18 03:31:05.582030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.215 [2024-11-18 03:31:05.582040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:31:02.215 [2024-11-18 03:31:05.582110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.215 [2024-11-18 03:31:05.582123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:02.215 [2024-11-18 03:31:05.582133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.215 [2024-11-18 03:31:05.582143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.215 [2024-11-18 03:31:05.582163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.215 [2024-11-18 03:31:05.582174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:02.215 [2024-11-18 03:31:05.582184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.216 [2024-11-18 03:31:05.582194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.216 [2024-11-18 03:31:05.596041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.216 [2024-11-18 03:31:05.596267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:02.216 [2024-11-18 03:31:05.596287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.216 [2024-11-18 03:31:05.596296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.216 [2024-11-18 03:31:05.608012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.216 [2024-11-18 03:31:05.608065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:02.216 [2024-11-18 03:31:05.608077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.216 [2024-11-18 03:31:05.608086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.216 [2024-11-18 03:31:05.608181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.216 [2024-11-18 03:31:05.608192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:02.216 [2024-11-18 03:31:05.608201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.216 [2024-11-18 03:31:05.608209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.216 [2024-11-18 03:31:05.608246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.216 [2024-11-18 03:31:05.608257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:02.216 [2024-11-18 03:31:05.608265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.216 [2024-11-18 03:31:05.608275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.216 [2024-11-18 03:31:05.608365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.216 [2024-11-18 03:31:05.608379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:02.216 [2024-11-18 03:31:05.608388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.216 [2024-11-18 03:31:05.608400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.216 [2024-11-18 03:31:05.608426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.216 [2024-11-18 03:31:05.608447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:02.216 [2024-11-18 03:31:05.608456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.216 [2024-11-18 
03:31:05.608464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.216 [2024-11-18 03:31:05.608505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.216 [2024-11-18 03:31:05.608521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:02.216 [2024-11-18 03:31:05.608529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.216 [2024-11-18 03:31:05.608538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.216 [2024-11-18 03:31:05.608586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.216 [2024-11-18 03:31:05.608598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:02.216 [2024-11-18 03:31:05.608609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.216 [2024-11-18 03:31:05.608617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.216 [2024-11-18 03:31:05.608759] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 40.970 ms, result 0 00:31:02.477 00:31:02.477 00:31:02.477 03:31:05 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:05.065 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:05.065 03:31:08 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:05.065 [2024-11-18 03:31:08.232365] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:05.065 [2024-11-18 03:31:08.232487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94992 ] 00:31:05.065 [2024-11-18 03:31:08.376655] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.065 [2024-11-18 03:31:08.418711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.065 [2024-11-18 03:31:08.527498] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:05.065 [2024-11-18 03:31:08.527809] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:05.327
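
restore.sh@76 above verifies the data read back from the restored FTL device with md5sum -c (the '.../testfile: OK' line), before restore.sh@79 starts the next spdk_dd write pass at --seek=131072. A rough Python equivalent of that verification step, assuming the usual '<digest>  <path>' md5sum file format:

import hashlib

def md5_of(path, chunk=1 << 20):
    """Stream a file through MD5 in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def check(md5_file):
    """Mimic 'md5sum -c': print OK/FAILED per entry, return overall result."""
    ok = True
    with open(md5_file) as f:
        for line in f:
            digest, path = line.split(maxsplit=1)
            path = path.strip()
            good = md5_of(path) == digest
            print(f"{path}: {'OK' if good else 'FAILED'}")
            ok = ok and good
    return ok

A successful run prints exactly the 'testfile: OK' line seen above.
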
[2024-11-18 03:31:08.689298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.327 [2024-11-18 03:31:08.689375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:05.327 [2024-11-18 03:31:08.689394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:05.327 [2024-11-18 03:31:08.689403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.327 [2024-11-18 03:31:08.689468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.327 [2024-11-18 03:31:08.689480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:05.327 [2024-11-18 03:31:08.689489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:31:05.327 [2024-11-18 03:31:08.689498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.327 [2024-11-18 03:31:08.689523] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:05.327 [2024-11-18 03:31:08.689815] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:05.327 [2024-11-18 03:31:08.689836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.327 [2024-11-18 03:31:08.689848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:05.327 [2024-11-18 03:31:08.689861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:31:05.327 [2024-11-18 03:31:08.689875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.327 [2024-11-18 03:31:08.690217] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:05.327 [2024-11-18 03:31:08.690249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.327 [2024-11-18 03:31:08.690266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:05.327 [2024-11-18 03:31:08.690277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:05.327 [2024-11-18 03:31:08.690285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.327 [2024-11-18 03:31:08.690571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.327 [2024-11-18 03:31:08.690625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:05.327 [2024-11-18 03:31:08.690671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:31:05.327 [2024-11-18 03:31:08.690692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.327 [2024-11-18 03:31:08.691133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.327 [2024-11-18 03:31:08.691176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:05.327 [2024-11-18 03:31:08.691199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:31:05.327 [2024-11-18 03:31:08.691218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.327 [2024-11-18 03:31:08.691426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.327 [2024-11-18 03:31:08.691462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:05.327 [2024-11-18 03:31:08.691488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:31:05.327 [2024-11-18 03:31:08.691507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.327 [2024-11-18 03:31:08.691550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.328 [2024-11-18 03:31:08.691574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:05.328 [2024-11-18 03:31:08.691595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:05.328 [2024-11-18 03:31:08.691615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.328 [2024-11-18 03:31:08.691652] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:05.328 [2024-11-18 03:31:08.693875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.328 [2024-11-18 03:31:08.694073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:05.328 [2024-11-18 03:31:08.694153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.229 ms 00:31:05.328 [2024-11-18 03:31:08.694179] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.328 [2024-11-18 03:31:08.694244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.328 [2024-11-18 03:31:08.694271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:05.328 [2024-11-18 03:31:08.694294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:05.328 [2024-11-18 03:31:08.694405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.328 [2024-11-18 03:31:08.694489] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:05.328 [2024-11-18 03:31:08.694529] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:05.328 [2024-11-18 03:31:08.694599] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:05.328 [2024-11-18 03:31:08.694720] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:05.328 [2024-11-18 03:31:08.694855] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:05.328 [2024-11-18 03:31:08.694986] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:05.328 [2024-11-18 03:31:08.695005] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:05.328 [2024-11-18 03:31:08.695017] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:05.328 [2024-11-18 03:31:08.695027] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:05.328 [2024-11-18 03:31:08.695042] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:05.328 [2024-11-18 03:31:08.695054] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:05.328 [2024-11-18 03:31:08.695062] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:05.328 [2024-11-18 03:31:08.695071] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:05.328 [2024-11-18 03:31:08.695083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.328 [2024-11-18 03:31:08.695096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:05.328 [2024-11-18 03:31:08.695104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:31:05.328 [2024-11-18 03:31:08.695113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.328 [2024-11-18 03:31:08.695208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.328 [2024-11-18 03:31:08.695219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:05.328 [2024-11-18 03:31:08.695231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:05.328 [2024-11-18 03:31:08.695240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.328 [2024-11-18 03:31:08.695367] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:05.328 [2024-11-18 03:31:08.695382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:05.328 [2024-11-18 03:31:08.695392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:05.328 
[2024-11-18 03:31:08.695404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.328 [2024-11-18 03:31:08.695413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:05.328 [2024-11-18 03:31:08.695427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:05.328 [2024-11-18 03:31:08.695436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:05.328 [2024-11-18 03:31:08.695444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:05.328 [2024-11-18 03:31:08.695451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:05.328 [2024-11-18 03:31:08.695458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:05.328 [2024-11-18 03:31:08.695466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:05.328 [2024-11-18 03:31:08.695472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:05.328 [2024-11-18 03:31:08.695478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:05.328 [2024-11-18 03:31:08.695486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:05.328 [2024-11-18 03:31:08.695493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:05.328 [2024-11-18 03:31:08.695500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.328 [2024-11-18 03:31:08.695508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:05.328 [2024-11-18 03:31:08.695516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:05.328 [2024-11-18 03:31:08.695523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.328 [2024-11-18 03:31:08.695537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:05.328 [2024-11-18 03:31:08.695546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:05.328 [2024-11-18 03:31:08.695554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.328 [2024-11-18 03:31:08.695561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:05.328 [2024-11-18 03:31:08.695569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:05.328 [2024-11-18 03:31:08.695577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.328 [2024-11-18 03:31:08.695586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:05.328 [2024-11-18 03:31:08.695596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:05.328 [2024-11-18 03:31:08.695603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.328 [2024-11-18 03:31:08.695611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:05.328 [2024-11-18 03:31:08.695619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:05.328 [2024-11-18 03:31:08.695627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.328 [2024-11-18 03:31:08.695634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:05.328 [2024-11-18 03:31:08.695642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:05.328 [2024-11-18 03:31:08.695650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:05.328 [2024-11-18 03:31:08.695658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:31:05.328 [2024-11-18 03:31:08.695671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:05.328 [2024-11-18 03:31:08.695679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:05.328 [2024-11-18 03:31:08.695688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:05.328 [2024-11-18 03:31:08.695697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:05.328 [2024-11-18 03:31:08.695704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.328 [2024-11-18 03:31:08.695712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:05.328 [2024-11-18 03:31:08.695719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:05.328 [2024-11-18 03:31:08.695726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.328 [2024-11-18 03:31:08.695734] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:05.328 [2024-11-18 03:31:08.695742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:05.328 [2024-11-18 03:31:08.695753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:05.328 [2024-11-18 03:31:08.695763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.328 [2024-11-18 03:31:08.695774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:05.328 [2024-11-18 03:31:08.695781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:05.328 [2024-11-18 03:31:08.695789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:05.328 [2024-11-18 03:31:08.695797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:05.328 [2024-11-18 03:31:08.695808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:05.328 [2024-11-18 03:31:08.695817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:05.328 [2024-11-18 03:31:08.695826] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:05.328 [2024-11-18 03:31:08.695839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:05.328 [2024-11-18 03:31:08.695850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:05.328 [2024-11-18 03:31:08.695859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:05.328 [2024-11-18 03:31:08.695868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:05.328 [2024-11-18 03:31:08.695876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:05.328 [2024-11-18 03:31:08.695885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:05.328 [2024-11-18 03:31:08.695893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:05.328 [2024-11-18 03:31:08.695901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:05.328 [2024-11-18 03:31:08.695910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:05.328 [2024-11-18 03:31:08.695918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:05.328 [2024-11-18 03:31:08.695925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:05.329 [2024-11-18 03:31:08.695932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:05.329 [2024-11-18 03:31:08.695938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:05.329 [2024-11-18 03:31:08.695948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:05.329 [2024-11-18 03:31:08.695955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:05.329 [2024-11-18 03:31:08.695964] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:05.329 [2024-11-18 03:31:08.695974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:05.329 [2024-11-18 03:31:08.695987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:05.329 [2024-11-18 03:31:08.695995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:05.329 [2024-11-18 03:31:08.696002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:05.329 [2024-11-18 03:31:08.696009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:05.329 [2024-11-18 03:31:08.696017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.696024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:05.329 [2024-11-18 03:31:08.696033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:31:05.329 [2024-11-18 03:31:08.696040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.716802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.717056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:05.329 [2024-11-18 03:31:08.717083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.715 ms 00:31:05.329 [2024-11-18 03:31:08.717095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.717214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.717229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:05.329 [2024-11-18 03:31:08.717241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 
00:31:05.329 [2024-11-18 03:31:08.717264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.729571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.729629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:05.329 [2024-11-18 03:31:08.729641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.193 ms 00:31:05.329 [2024-11-18 03:31:08.729649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.729687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.729696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:05.329 [2024-11-18 03:31:08.729705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:05.329 [2024-11-18 03:31:08.729713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.729820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.729831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:05.329 [2024-11-18 03:31:08.729844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:05.329 [2024-11-18 03:31:08.729852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.729974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.729985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:05.329 [2024-11-18 03:31:08.729994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:31:05.329 [2024-11-18 03:31:08.730006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.737265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.737340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:05.329 [2024-11-18 03:31:08.737352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.237 ms 00:31:05.329 [2024-11-18 03:31:08.737368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.737493] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:05.329 [2024-11-18 03:31:08.737508] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:05.329 [2024-11-18 03:31:08.737518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.737533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:05.329 [2024-11-18 03:31:08.737542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:05.329 [2024-11-18 03:31:08.737551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.749867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.749911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:05.329 [2024-11-18 03:31:08.749923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.296 ms 00:31:05.329 [2024-11-18 03:31:08.749931] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.750068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.750079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:05.329 [2024-11-18 03:31:08.750095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:31:05.329 [2024-11-18 03:31:08.750107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.750160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.750170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:05.329 [2024-11-18 03:31:08.750183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:05.329 [2024-11-18 03:31:08.750190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.750540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.750556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:05.329 [2024-11-18 03:31:08.750565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:31:05.329 [2024-11-18 03:31:08.750573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.750594] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:05.329 [2024-11-18 03:31:08.750605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.750613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:05.329 [2024-11-18 03:31:08.750629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:05.329 [2024-11-18 03:31:08.750641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.760046] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:05.329 [2024-11-18 03:31:08.760382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.760400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:05.329 [2024-11-18 03:31:08.760413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.706 ms 00:31:05.329 [2024-11-18 03:31:08.760423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.762954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.762992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:05.329 [2024-11-18 03:31:08.763004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.497 ms 00:31:05.329 [2024-11-18 03:31:08.763014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.763122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.763134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:05.329 [2024-11-18 03:31:08.763151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:31:05.329 [2024-11-18 03:31:08.763160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.763186] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.763196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:05.329 [2024-11-18 03:31:08.763205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:05.329 [2024-11-18 03:31:08.763213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.763249] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:05.329 [2024-11-18 03:31:08.763260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.763277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:05.329 [2024-11-18 03:31:08.763286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:05.329 [2024-11-18 03:31:08.763294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.770170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.770395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:05.329 [2024-11-18 03:31:08.770476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.858 ms 00:31:05.329 [2024-11-18 03:31:08.770501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.770588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.329 [2024-11-18 03:31:08.770601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:05.329 [2024-11-18 03:31:08.770611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:31:05.329 [2024-11-18 03:31:08.770619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.329 [2024-11-18 03:31:08.771845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 82.113 ms, result 0 00:31:06.275
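
The second spdk_dd pass follows; its per-interval progress spinner is condensed below. Parsing the samples is useful when a copy degrades, as here: the run averages 12 MBps but the last chunk limps in at 5608 kBps. A sketch of a parser for the two sample formats that appear (MB and kB); the names are mine:

import re

SAMPLE = re.compile(r"Copying: (\d+)/(\d+) \[(MB|kB)\] \((\d+) (MBps|kBps)\)")

def throughput_samples(log_text):
    """Yield (done_mb, total_mb, rate_mbps) per progress sample."""
    for done, total, unit, rate, rate_unit in SAMPLE.findall(log_text):
        scale = 1.0 if unit == "MB" else 1.0 / 1024
        rscale = 1.0 if rate_unit == "MBps" else 1.0 / 1024
        yield int(done) * scale, int(total) * scale, int(rate) * rscale

print(list(throughput_samples("Copying: 1048128/1048576 [kB] (5608 kBps)")))
# [(1023.5625, 1024.0, 5.4765625)]
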
[MB] (11 MBps) [2024-11-18T03:31:36.129Z] Copying: 399/1024 [MB] (11 MBps) [2024-11-18T03:31:37.063Z] Copying: 411/1024 [MB] (11 MBps) [2024-11-18T03:31:37.997Z] Copying: 422/1024 [MB] (11 MBps) [2024-11-18T03:31:38.931Z] Copying: 434/1024 [MB] (11 MBps) [2024-11-18T03:31:39.909Z] Copying: 445/1024 [MB] (11 MBps) [2024-11-18T03:31:40.880Z] Copying: 456/1024 [MB] (11 MBps) [2024-11-18T03:31:41.815Z] Copying: 467/1024 [MB] (11 MBps) [2024-11-18T03:31:43.187Z] Copying: 479/1024 [MB] (11 MBps) [2024-11-18T03:31:44.122Z] Copying: 490/1024 [MB] (11 MBps) [2024-11-18T03:31:45.062Z] Copying: 501/1024 [MB] (11 MBps) [2024-11-18T03:31:45.995Z] Copying: 512/1024 [MB] (10 MBps) [2024-11-18T03:31:46.929Z] Copying: 523/1024 [MB] (10 MBps) [2024-11-18T03:31:47.865Z] Copying: 534/1024 [MB] (11 MBps) [2024-11-18T03:31:48.803Z] Copying: 546/1024 [MB] (11 MBps) [2024-11-18T03:31:50.177Z] Copying: 557/1024 [MB] (10 MBps) [2024-11-18T03:31:51.109Z] Copying: 568/1024 [MB] (11 MBps) [2024-11-18T03:31:52.041Z] Copying: 580/1024 [MB] (11 MBps) [2024-11-18T03:31:52.975Z] Copying: 591/1024 [MB] (11 MBps) [2024-11-18T03:31:53.911Z] Copying: 603/1024 [MB] (11 MBps) [2024-11-18T03:31:54.852Z] Copying: 614/1024 [MB] (11 MBps) [2024-11-18T03:31:56.227Z] Copying: 625/1024 [MB] (10 MBps) [2024-11-18T03:31:56.793Z] Copying: 637/1024 [MB] (11 MBps) [2024-11-18T03:31:58.167Z] Copying: 648/1024 [MB] (11 MBps) [2024-11-18T03:31:59.101Z] Copying: 660/1024 [MB] (11 MBps) [2024-11-18T03:32:00.039Z] Copying: 671/1024 [MB] (11 MBps) [2024-11-18T03:32:00.973Z] Copying: 683/1024 [MB] (11 MBps) [2024-11-18T03:32:01.906Z] Copying: 694/1024 [MB] (11 MBps) [2024-11-18T03:32:02.838Z] Copying: 705/1024 [MB] (11 MBps) [2024-11-18T03:32:04.216Z] Copying: 717/1024 [MB] (11 MBps) [2024-11-18T03:32:05.151Z] Copying: 728/1024 [MB] (11 MBps) [2024-11-18T03:32:06.090Z] Copying: 739/1024 [MB] (10 MBps) [2024-11-18T03:32:07.027Z] Copying: 750/1024 [MB] (11 MBps) [2024-11-18T03:32:07.965Z] Copying: 761/1024 [MB] (10 MBps) [2024-11-18T03:32:08.899Z] Copying: 772/1024 [MB] (11 MBps) [2024-11-18T03:32:09.838Z] Copying: 784/1024 [MB] (11 MBps) [2024-11-18T03:32:11.223Z] Copying: 795/1024 [MB] (11 MBps) [2024-11-18T03:32:11.876Z] Copying: 806/1024 [MB] (10 MBps) [2024-11-18T03:32:12.811Z] Copying: 817/1024 [MB] (11 MBps) [2024-11-18T03:32:14.188Z] Copying: 829/1024 [MB] (11 MBps) [2024-11-18T03:32:15.131Z] Copying: 840/1024 [MB] (11 MBps) [2024-11-18T03:32:16.067Z] Copying: 850/1024 [MB] (10 MBps) [2024-11-18T03:32:17.002Z] Copying: 862/1024 [MB] (11 MBps) [2024-11-18T03:32:17.935Z] Copying: 873/1024 [MB] (11 MBps) [2024-11-18T03:32:18.868Z] Copying: 884/1024 [MB] (11 MBps) [2024-11-18T03:32:19.805Z] Copying: 896/1024 [MB] (11 MBps) [2024-11-18T03:32:21.180Z] Copying: 907/1024 [MB] (11 MBps) [2024-11-18T03:32:22.116Z] Copying: 918/1024 [MB] (10 MBps) [2024-11-18T03:32:23.052Z] Copying: 929/1024 [MB] (11 MBps) [2024-11-18T03:32:23.987Z] Copying: 940/1024 [MB] (11 MBps) [2024-11-18T03:32:24.922Z] Copying: 952/1024 [MB] (11 MBps) [2024-11-18T03:32:25.861Z] Copying: 963/1024 [MB] (11 MBps) [2024-11-18T03:32:26.798Z] Copying: 974/1024 [MB] (10 MBps) [2024-11-18T03:32:28.172Z] Copying: 984/1024 [MB] (10 MBps) [2024-11-18T03:32:29.105Z] Copying: 995/1024 [MB] (11 MBps) [2024-11-18T03:32:30.039Z] Copying: 1006/1024 [MB] (11 MBps) [2024-11-18T03:32:30.976Z] Copying: 1018/1024 [MB] (11 MBps) [2024-11-18T03:32:31.548Z] Copying: 1048128/1048576 [kB] (5608 kBps) [2024-11-18T03:32:31.548Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-18 03:32:31.306415] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.971 [2024-11-18 03:32:31.306511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:27.971 [2024-11-18 03:32:31.306531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:27.971 [2024-11-18 03:32:31.306540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.971 [2024-11-18 03:32:31.310267] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:27.971 [2024-11-18 03:32:31.312823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.971 [2024-11-18 03:32:31.312874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:27.971 [2024-11-18 03:32:31.312887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.320 ms 00:32:27.971 [2024-11-18 03:32:31.312897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.971 [2024-11-18 03:32:31.324092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.971 [2024-11-18 03:32:31.324155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:27.971 [2024-11-18 03:32:31.324168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.872 ms 00:32:27.971 [2024-11-18 03:32:31.324177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.971 [2024-11-18 03:32:31.324206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.971 [2024-11-18 03:32:31.324216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:27.971 [2024-11-18 03:32:31.324225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:27.971 [2024-11-18 03:32:31.324240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.971 [2024-11-18 03:32:31.324306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.971 [2024-11-18 03:32:31.324344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:27.971 [2024-11-18 03:32:31.324357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:27.971 [2024-11-18 03:32:31.324366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.972 [2024-11-18 03:32:31.324381] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:27.972 [2024-11-18 03:32:31.324396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127488 / 261120 wr_cnt: 1 state: open 00:32:27.972 [2024-11-18 03:32:31.324407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324453] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 
03:32:31.324662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:32:27.972 [2024-11-18 03:32:31.324862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.324996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:27.972 [2024-11-18 03:32:31.325119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:27.973 [2024-11-18 03:32:31.325127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:27.973 [2024-11-18 03:32:31.325137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:27.973 [2024-11-18 03:32:31.325149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:27.973 [2024-11-18 03:32:31.325157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:27.973 [2024-11-18 03:32:31.325165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:27.973 [2024-11-18 03:32:31.325173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:27.973 [2024-11-18 03:32:31.325184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:27.973 [2024-11-18 03:32:31.325192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:27.973 [2024-11-18 03:32:31.325200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:27.973 [2024-11-18 03:32:31.325209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:27.973 [2024-11-18 03:32:31.325218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:27.973 [2024-11-18 03:32:31.325226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:27.973 [2024-11-18 03:32:31.325244] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:27.973 [2024-11-18 03:32:31.325267] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b07b35b-3d9f-4fe6-9057-425d629783dd 00:32:27.973 [2024-11-18 03:32:31.325276] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127488 00:32:27.973 [2024-11-18 03:32:31.325286] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127520 00:32:27.973 [2024-11-18 03:32:31.325295] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127488 00:32:27.973 [2024-11-18 03:32:31.325303] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:32:27.973 [2024-11-18 03:32:31.325327] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:27.973 [2024-11-18 03:32:31.325341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:32:27.973 [2024-11-18 03:32:31.325351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:27.973 [2024-11-18 03:32:31.325358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:27.973 [2024-11-18 03:32:31.325366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:27.973 [2024-11-18 03:32:31.325374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.973 [2024-11-18 03:32:31.325384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:27.973 [2024-11-18 03:32:31.325394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:32:27.973 [2024-11-18 03:32:31.325404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.328607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.973 [2024-11-18 03:32:31.328647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:27.973 [2024-11-18 03:32:31.328660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.187 ms 00:32:27.973 [2024-11-18 03:32:31.328678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.328848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.973 [2024-11-18 03:32:31.328860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:27.973 [2024-11-18 03:32:31.328871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:32:27.973 [2024-11-18 03:32:31.328888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.338168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.973 [2024-11-18 03:32:31.338405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:27.973 [2024-11-18 03:32:31.338425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.973 [2024-11-18 03:32:31.338435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.338508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.973 [2024-11-18 03:32:31.338517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:27.973 [2024-11-18 03:32:31.338527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.973 [2024-11-18 03:32:31.338534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.338594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.973 [2024-11-18 03:32:31.338608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:27.973 [2024-11-18 03:32:31.338616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.973 [2024-11-18 03:32:31.338630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.338646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.973 [2024-11-18 03:32:31.338654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:27.973 [2024-11-18 03:32:31.338663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.973 [2024-11-18 03:32:31.338670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.358495] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.973 [2024-11-18 03:32:31.358552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:27.973 [2024-11-18 03:32:31.358574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.973 [2024-11-18 03:32:31.358583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.375063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.973 [2024-11-18 03:32:31.375128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:27.973 [2024-11-18 03:32:31.375142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.973 [2024-11-18 03:32:31.375151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.375211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.973 [2024-11-18 03:32:31.375221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:27.973 [2024-11-18 03:32:31.375232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.973 [2024-11-18 03:32:31.375240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.375295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.973 [2024-11-18 03:32:31.375307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:27.973 [2024-11-18 03:32:31.375341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.973 [2024-11-18 03:32:31.375351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.375423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.973 [2024-11-18 03:32:31.375435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:27.973 [2024-11-18 03:32:31.375445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.973 [2024-11-18 03:32:31.375454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.375492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.973 [2024-11-18 03:32:31.375514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:27.973 [2024-11-18 03:32:31.375523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.973 [2024-11-18 03:32:31.375532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.375583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.973 [2024-11-18 03:32:31.375598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:27.973 [2024-11-18 03:32:31.375607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.973 [2024-11-18 03:32:31.375616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.973 [2024-11-18 03:32:31.375682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.973 [2024-11-18 03:32:31.375697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:27.973 [2024-11-18 03:32:31.375709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.973 [2024-11-18 03:32:31.375718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:32:27.973 [2024-11-18 03:32:31.375888] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 71.105 ms, result 0 00:32:28.914 00:32:28.914 00:32:28.914 03:32:32 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:28.914 [2024-11-18 03:32:32.421179] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:28.914 [2024-11-18 03:32:32.421351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95861 ] 00:32:29.174 [2024-11-18 03:32:32.574669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.174 [2024-11-18 03:32:32.647515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.435 [2024-11-18 03:32:32.797659] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:29.435 [2024-11-18 03:32:32.798037] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:29.435 [2024-11-18 03:32:32.962213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.435 [2024-11-18 03:32:32.962274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:29.435 [2024-11-18 03:32:32.962296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:29.435 [2024-11-18 03:32:32.962306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.435 [2024-11-18 03:32:32.962391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.435 [2024-11-18 03:32:32.962403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:29.435 [2024-11-18 03:32:32.962414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:32:29.435 [2024-11-18 03:32:32.962424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.435 [2024-11-18 03:32:32.962448] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:29.435 [2024-11-18 03:32:32.962748] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:29.435 [2024-11-18 03:32:32.962778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.435 [2024-11-18 03:32:32.962789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:29.435 [2024-11-18 03:32:32.962799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:32:29.435 [2024-11-18 03:32:32.962815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.435 [2024-11-18 03:32:32.963145] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:29.435 [2024-11-18 03:32:32.963180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.435 [2024-11-18 03:32:32.963190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:29.435 [2024-11-18 03:32:32.963200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:29.435 [2024-11-18 03:32:32.963215] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.435 [2024-11-18 03:32:32.963284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.435 [2024-11-18 03:32:32.963297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:29.435 [2024-11-18 03:32:32.963309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:29.435 [2024-11-18 03:32:32.963338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.435 [2024-11-18 03:32:32.963607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.435 [2024-11-18 03:32:32.963621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:29.435 [2024-11-18 03:32:32.963630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:32:29.435 [2024-11-18 03:32:32.963639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.435 [2024-11-18 03:32:32.963781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.435 [2024-11-18 03:32:32.963798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:29.435 [2024-11-18 03:32:32.963813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:32:29.435 [2024-11-18 03:32:32.963826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.435 [2024-11-18 03:32:32.963854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.435 [2024-11-18 03:32:32.963865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:29.436 [2024-11-18 03:32:32.963874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:29.436 [2024-11-18 03:32:32.963882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.436 [2024-11-18 03:32:32.963904] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:29.436 [2024-11-18 03:32:32.967004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.436 [2024-11-18 03:32:32.967174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:29.436 [2024-11-18 03:32:32.967356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.105 ms 00:32:29.436 [2024-11-18 03:32:32.967389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.436 [2024-11-18 03:32:32.967449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.436 [2024-11-18 03:32:32.967472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:29.436 [2024-11-18 03:32:32.967503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:29.436 [2024-11-18 03:32:32.967522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.436 [2024-11-18 03:32:32.967603] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:29.436 [2024-11-18 03:32:32.967648] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:29.436 [2024-11-18 03:32:32.967793] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:29.436 [2024-11-18 03:32:32.967840] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:29.436 [2024-11-18 03:32:32.967973] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:29.436 [2024-11-18 03:32:32.968011] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:29.436 [2024-11-18 03:32:32.968044] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:29.436 [2024-11-18 03:32:32.968080] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:29.436 [2024-11-18 03:32:32.968178] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:29.436 [2024-11-18 03:32:32.968208] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:29.436 [2024-11-18 03:32:32.968238] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:29.436 [2024-11-18 03:32:32.968264] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:29.436 [2024-11-18 03:32:32.968284] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:29.436 [2024-11-18 03:32:32.968306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.436 [2024-11-18 03:32:32.968348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:29.436 [2024-11-18 03:32:32.968374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:32:29.436 [2024-11-18 03:32:32.968438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.436 [2024-11-18 03:32:32.968558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.436 [2024-11-18 03:32:32.968583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:29.436 [2024-11-18 03:32:32.968605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:32:29.436 [2024-11-18 03:32:32.968629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.436 [2024-11-18 03:32:32.968741] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:29.436 [2024-11-18 03:32:32.968767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:29.436 [2024-11-18 03:32:32.968788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:29.436 [2024-11-18 03:32:32.968808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:29.436 [2024-11-18 03:32:32.968890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:29.436 [2024-11-18 03:32:32.968927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:29.436 [2024-11-18 03:32:32.968947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:29.436 [2024-11-18 03:32:32.968965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:29.436 [2024-11-18 03:32:32.968986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:29.436 [2024-11-18 03:32:32.969007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:29.436 [2024-11-18 03:32:32.969027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:29.436 [2024-11-18 03:32:32.969049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:29.436 [2024-11-18 03:32:32.969067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:29.436 [2024-11-18 
03:32:32.969120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:29.436 [2024-11-18 03:32:32.969142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:29.436 [2024-11-18 03:32:32.969161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:29.436 [2024-11-18 03:32:32.969215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:29.436 [2024-11-18 03:32:32.969236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:29.436 [2024-11-18 03:32:32.969254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:29.436 [2024-11-18 03:32:32.969376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:29.436 [2024-11-18 03:32:32.969401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:29.436 [2024-11-18 03:32:32.969424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:29.436 [2024-11-18 03:32:32.969442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:29.436 [2024-11-18 03:32:32.969461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:29.436 [2024-11-18 03:32:32.969482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:29.436 [2024-11-18 03:32:32.969506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:29.436 [2024-11-18 03:32:32.969526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:29.436 [2024-11-18 03:32:32.969546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:29.436 [2024-11-18 03:32:32.969563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:29.436 [2024-11-18 03:32:32.969582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:29.436 [2024-11-18 03:32:32.969600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:29.436 [2024-11-18 03:32:32.969717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:29.436 [2024-11-18 03:32:32.969729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:29.436 [2024-11-18 03:32:32.969740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:29.436 [2024-11-18 03:32:32.969748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:29.436 [2024-11-18 03:32:32.969756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:29.436 [2024-11-18 03:32:32.969763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:29.436 [2024-11-18 03:32:32.969770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:29.436 [2024-11-18 03:32:32.969777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:29.436 [2024-11-18 03:32:32.969785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:29.436 [2024-11-18 03:32:32.969792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:29.436 [2024-11-18 03:32:32.969803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:29.436 [2024-11-18 03:32:32.969810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:29.436 [2024-11-18 03:32:32.969817] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:29.436 [2024-11-18 03:32:32.969826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:32:29.436 [2024-11-18 03:32:32.969834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:29.436 [2024-11-18 03:32:32.969844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:29.436 [2024-11-18 03:32:32.969853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:29.436 [2024-11-18 03:32:32.969860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:29.436 [2024-11-18 03:32:32.969867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:29.436 [2024-11-18 03:32:32.969874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:29.436 [2024-11-18 03:32:32.969881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:29.436 [2024-11-18 03:32:32.969888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:29.436 [2024-11-18 03:32:32.969899] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:29.436 [2024-11-18 03:32:32.969916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:29.436 [2024-11-18 03:32:32.969933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:29.436 [2024-11-18 03:32:32.969941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:29.437 [2024-11-18 03:32:32.969952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:29.437 [2024-11-18 03:32:32.969961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:29.437 [2024-11-18 03:32:32.969968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:29.437 [2024-11-18 03:32:32.969976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:29.437 [2024-11-18 03:32:32.969984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:29.437 [2024-11-18 03:32:32.969991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:29.437 [2024-11-18 03:32:32.969999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:29.437 [2024-11-18 03:32:32.970009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:29.437 [2024-11-18 03:32:32.970019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:29.437 [2024-11-18 03:32:32.970028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:29.437 [2024-11-18 03:32:32.970036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:29.437 [2024-11-18 03:32:32.970044] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:29.437 [2024-11-18 03:32:32.970052] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:29.437 [2024-11-18 03:32:32.970060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:29.437 [2024-11-18 03:32:32.970076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:29.437 [2024-11-18 03:32:32.970083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:29.437 [2024-11-18 03:32:32.970094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:29.437 [2024-11-18 03:32:32.970101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:29.437 [2024-11-18 03:32:32.970111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.437 [2024-11-18 03:32:32.970119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:29.437 [2024-11-18 03:32:32.970127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:32:29.437 [2024-11-18 03:32:32.970136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.437 [2024-11-18 03:32:32.994283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.437 [2024-11-18 03:32:32.994388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:29.437 [2024-11-18 03:32:32.994419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.085 ms 00:32:29.437 [2024-11-18 03:32:32.994436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.437 [2024-11-18 03:32:32.994607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.437 [2024-11-18 03:32:32.994642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:29.437 [2024-11-18 03:32:32.994663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:32:29.437 [2024-11-18 03:32:32.994682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.011423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.011473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:29.699 [2024-11-18 03:32:33.011498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.609 ms 00:32:29.699 [2024-11-18 03:32:33.011508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.011547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.011558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:29.699 [2024-11-18 03:32:33.011575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:29.699 [2024-11-18 03:32:33.011585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.011686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 
03:32:33.011699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:29.699 [2024-11-18 03:32:33.011716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:32:29.699 [2024-11-18 03:32:33.011730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.011872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.011884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:29.699 [2024-11-18 03:32:33.011892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:32:29.699 [2024-11-18 03:32:33.011901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.021652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.021697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:29.699 [2024-11-18 03:32:33.021709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.729 ms 00:32:29.699 [2024-11-18 03:32:33.021725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.021873] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:29.699 [2024-11-18 03:32:33.021888] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:29.699 [2024-11-18 03:32:33.021899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.021908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:29.699 [2024-11-18 03:32:33.021924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:32:29.699 [2024-11-18 03:32:33.021933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.034262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.034325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:29.699 [2024-11-18 03:32:33.034337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.305 ms 00:32:29.699 [2024-11-18 03:32:33.034345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.034485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.034497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:29.699 [2024-11-18 03:32:33.034507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:32:29.699 [2024-11-18 03:32:33.034516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.034576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.034587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:29.699 [2024-11-18 03:32:33.034596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:29.699 [2024-11-18 03:32:33.034608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.034961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.034983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize P2L checkpointing 00:32:29.699 [2024-11-18 03:32:33.034992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:32:29.699 [2024-11-18 03:32:33.035000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.035018] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:29.699 [2024-11-18 03:32:33.035036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.035053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:29.699 [2024-11-18 03:32:33.035068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:29.699 [2024-11-18 03:32:33.035080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.045926] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:29.699 [2024-11-18 03:32:33.046097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.046109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:29.699 [2024-11-18 03:32:33.046120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.997 ms 00:32:29.699 [2024-11-18 03:32:33.046129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.048872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.048911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:29.699 [2024-11-18 03:32:33.048922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:32:29.699 [2024-11-18 03:32:33.048930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.049012] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:29.699 [2024-11-18 03:32:33.049667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.049691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:29.699 [2024-11-18 03:32:33.049702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:32:29.699 [2024-11-18 03:32:33.049711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.049745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.049755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:29.699 [2024-11-18 03:32:33.049764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:29.699 [2024-11-18 03:32:33.049772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 [2024-11-18 03:32:33.049813] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:29.699 [2024-11-18 03:32:33.049826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.699 [2024-11-18 03:32:33.049835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:29.699 [2024-11-18 03:32:33.049844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:29.699 [2024-11-18 03:32:33.049852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.699 
[2024-11-18 03:32:33.057123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.700 [2024-11-18 03:32:33.057371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:29.700 [2024-11-18 03:32:33.057392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.253 ms 00:32:29.700 [2024-11-18 03:32:33.057402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.700 [2024-11-18 03:32:33.057492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:29.700 [2024-11-18 03:32:33.057504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:29.700 [2024-11-18 03:32:33.057513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:29.700 [2024-11-18 03:32:33.057523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:29.700 [2024-11-18 03:32:33.058905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 96.183 ms, result 0 00:32:31.079  [2024-11-18T03:32:35.592Z] Copying: 10/1024 [MB] (10 MBps) [... incremental copy progress updates omitted ...] [2024-11-18T03:34:01.098Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-18 03:34:00.961116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:57.521 [2024-11-18 03:34:00.961448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:57.521 [2024-11-18 03:34:00.961482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:57.521 [2024-11-18 03:34:00.961496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:57.521 [2024-11-18 03:34:00.961537] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:57.521 [2024-11-18 03:34:00.964084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:57.521 [2024-11-18 
00:33:57.521 [2024-11-18 03:34:00.964124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:33:57.521 [2024-11-18 03:34:00.964139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.524 ms
00:33:57.521 [2024-11-18 03:34:00.964152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.521 [2024-11-18 03:34:00.964508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:57.521 [2024-11-18 03:34:00.964529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:33:57.521 [2024-11-18 03:34:00.964551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms
00:33:57.521 [2024-11-18 03:34:00.964562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.521 [2024-11-18 03:34:00.964603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:57.521 [2024-11-18 03:34:00.964617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:33:57.521 [2024-11-18 03:34:00.964628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:33:57.521 [2024-11-18 03:34:00.964641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.521 [2024-11-18 03:34:00.964721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:57.521 [2024-11-18 03:34:00.964736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:33:57.521 [2024-11-18 03:34:00.964752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:33:57.521 [2024-11-18 03:34:00.964764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.521 [2024-11-18 03:34:00.964786] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:33:57.521 [2024-11-18 03:34:00.964804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:33:57.521 [2024-11-18 03:34:00.964819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.964990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:33:57.521 [2024-11-18 03:34:00.965511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.965996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.966008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.966021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.966033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:33:57.522 [2024-11-18 03:34:00.966057] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:33:57.522 [2024-11-18 03:34:00.966075] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b07b35b-3d9f-4fe6-9057-425d629783dd
00:33:57.522 [2024-11-18 03:34:00.966089] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:33:57.522 [2024-11-18 03:34:00.966101] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3616
00:33:57.522 [2024-11-18 03:34:00.966112] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3584
00:33:57.522 [2024-11-18 03:34:00.966124] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089
00:33:57.522 [2024-11-18 03:34:00.966135] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:33:57.522 [2024-11-18 03:34:00.966151] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:33:57.522 [2024-11-18 03:34:00.966163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:33:57.522 [2024-11-18 03:34:00.966173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:33:57.522 [2024-11-18 03:34:00.966183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:33:57.522 [2024-11-18 03:34:00.966193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:57.522 [2024-11-18 03:34:00.966215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:33:57.522 [2024-11-18 03:34:00.966228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms
00:33:57.522 [2024-11-18 03:34:00.966240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.522 [2024-11-18 03:34:00.969623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:57.522 [2024-11-18 03:34:00.969783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:33:57.522 [2024-11-18 03:34:00.969865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.362 ms
00:33:57.522 [2024-11-18 03:34:00.969909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.522 [2024-11-18 03:34:00.970062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:57.522 [2024-11-18 03:34:00.970642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:33:57.522 [2024-11-18 03:34:00.971068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms
00:33:57.522 [2024-11-18 03:34:00.971155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.522 [2024-11-18 03:34:00.981954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.522 [2024-11-18 03:34:00.982226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:33:57.522 [2024-11-18 03:34:00.982648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.522 [2024-11-18 03:34:00.982814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.522 [2024-11-18 03:34:00.982983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.522 [2024-11-18 03:34:00.983038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:33:57.522 [2024-11-18 03:34:00.983063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.522 [2024-11-18 03:34:00.983082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.522 [2024-11-18 03:34:00.983147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.522 [2024-11-18 03:34:00.983216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:33:57.522 [2024-11-18 03:34:00.983240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.522 [2024-11-18 03:34:00.983267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.522 [2024-11-18 03:34:00.983297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.522 [2024-11-18 03:34:00.983330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:33:57.522 [2024-11-18 03:34:00.983377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.522 [2024-11-18 03:34:00.983402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.522 [2024-11-18 03:34:00.997033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.522 [2024-11-18 03:34:00.997178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:33:57.522 [2024-11-18 03:34:00.997239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.522 [2024-11-18 03:34:00.997262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.522 [2024-11-18 03:34:01.009175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.522 [2024-11-18 03:34:01.009345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:33:57.522 [2024-11-18 03:34:01.009412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.522 [2024-11-18 03:34:01.009435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.522 [2024-11-18 03:34:01.009504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.522 [2024-11-18 03:34:01.009527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:33:57.522 [2024-11-18 03:34:01.009548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.522 [2024-11-18 03:34:01.009568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.522 [2024-11-18 03:34:01.009622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.522 [2024-11-18 03:34:01.009646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:33:57.522 [2024-11-18 03:34:01.009722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.522 [2024-11-18 03:34:01.009746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.522 [2024-11-18 03:34:01.009822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.522 [2024-11-18 03:34:01.009884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:33:57.523 [2024-11-18 03:34:01.009907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.523 [2024-11-18 03:34:01.009927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.523 [2024-11-18 03:34:01.009963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.523 [2024-11-18 03:34:01.009981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:33:57.523 [2024-11-18 03:34:01.009990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.523 [2024-11-18 03:34:01.009997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.523 [2024-11-18 03:34:01.010046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.523 [2024-11-18 03:34:01.010055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:33:57.523 [2024-11-18 03:34:01.010064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.523 [2024-11-18 03:34:01.010072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.523 [2024-11-18 03:34:01.010137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:57.523 [2024-11-18 03:34:01.010150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:33:57.523 [2024-11-18 03:34:01.010159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:57.523 [2024-11-18 03:34:01.010167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:57.523 [2024-11-18 03:34:01.010337] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 49.168 ms, result 0
00:33:57.784 
00:33:57.784 
00:34:00.331 03:34:01 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:34:00.331 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:34:00.331 Process with pid 93270 is not found
00:34:00.331 Remove shared memory files
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 93270
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93270 ']'
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93270
00:34:00.331 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (93270) - No such process
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 93270 is not found'
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_3b07b35b-3d9f-4fe6-9057-425d629783dd_band_md /dev/hugepages/ftl_3b07b35b-3d9f-4fe6-9057-425d629783dd_l2p_l1 /dev/hugepages/ftl_3b07b35b-3d9f-4fe6-9057-425d629783dd_l2p_l2 /dev/hugepages/ftl_3b07b35b-3d9f-4fe6-9057-425d629783dd_l2p_l2_ctx /dev/hugepages/ftl_3b07b35b-3d9f-4fe6-9057-425d629783dd_nvc_md /dev/hugepages/ftl_3b07b35b-3d9f-4fe6-9057-425d629783dd_p2l_pool /dev/hugepages/ftl_3b07b35b-3d9f-4fe6-9057-425d629783dd_sb /dev/hugepages/ftl_3b07b35b-3d9f-4fe6-9057-425d629783dd_sb_shm /dev/hugepages/ftl_3b07b35b-3d9f-4fe6-9057-425d629783dd_trim_bitmap /dev/hugepages/ftl_3b07b35b-3d9f-4fe6-9057-425d629783dd_trim_log /dev/hugepages/ftl_3b07b35b-3d9f-4fe6-9057-425d629783dd_trim_md /dev/hugepages/ftl_3b07b35b-3d9f-4fe6-9057-425d629783dd_vmap
00:34:00.331 03:34:03 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:34:00.332 03:34:03 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:00.332 03:34:03 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:34:00.332 ************************************
00:34:00.332 END TEST ftl_restore_fast
00:34:00.332 ************************************
00:34:00.332 
00:34:00.332 real 5m45.565s
00:34:00.332 user 5m33.136s
00:34:00.332 sys 0m12.157s
00:34:00.332 03:34:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:00.332 03:34:03 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:34:00.332 Process with pid 84145 is not found
00:34:00.332 03:34:03 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:34:00.332 03:34:03 ftl -- ftl/ftl.sh@14 -- # killprocess 84145
00:34:00.332 03:34:03 ftl -- common/autotest_common.sh@950 -- # '[' -z 84145 ']'
00:34:00.332 03:34:03 ftl -- common/autotest_common.sh@954 -- # kill -0 84145
00:34:00.332 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (84145) - No such process
00:34:00.332 03:34:03 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 84145 is not found'
00:34:00.332 03:34:03 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:34:00.332 03:34:03 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=96803
00:34:00.332 03:34:03 ftl -- ftl/ftl.sh@20 -- # waitforlisten 96803
00:34:00.332 03:34:03 ftl -- common/autotest_common.sh@831 -- # '[' -z 96803 ']'
00:34:00.332 03:34:03 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:00.332 03:34:03 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:34:00.332 03:34:03 ftl -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:00.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:00.332 03:34:03 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:00.332 03:34:03 ftl -- common/autotest_common.sh@840 -- # xtrace_disable
00:34:00.332 03:34:03 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:00.332 [2024-11-18 03:34:03.739681] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:34:00.332 [2024-11-18 03:34:03.740382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96803 ]
00:34:00.332 [2024-11-18 03:34:03.886632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:00.332 [2024-11-18 03:34:03.960217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:34:01.163 03:34:04 ftl -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:34:01.163 03:34:04 ftl -- common/autotest_common.sh@864 -- # return 0
00:34:01.163 03:34:04 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:34:01.423 nvme0n1
00:34:01.423 03:34:04 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:34:01.423 03:34:04 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:34:01.423 03:34:04 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:34:01.683 03:34:05 ftl -- ftl/common.sh@28 -- # stores=0ee00606-49ce-4180-95cc-61dea4a3a674
00:34:01.683 03:34:05 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:34:01.683 03:34:05 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0ee00606-49ce-4180-95cc-61dea4a3a674
00:34:01.943 03:34:05 ftl -- ftl/ftl.sh@23 -- # killprocess 96803
00:34:01.943 03:34:05 ftl -- common/autotest_common.sh@950 -- # '[' -z 96803 ']'
00:34:01.943 03:34:05 ftl -- common/autotest_common.sh@954 -- # kill -0 96803
00:34:01.943 03:34:05 ftl -- common/autotest_common.sh@955 -- # uname
00:34:01.943 03:34:05 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:01.943 03:34:05 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96803
00:34:01.943 killing process with pid 96803
00:34:01.943 03:34:05 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:34:01.943 03:34:05 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:34:01.943 03:34:05 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96803'
00:34:01.943 03:34:05 ftl -- common/autotest_common.sh@969 -- # kill 96803
00:34:01.943 03:34:05 ftl -- common/autotest_common.sh@974 -- # wait 96803
00:34:02.204 03:34:05 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:34:02.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:02.466 Waiting for block devices as requested
00:34:02.466 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:34:02.727 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:34:02.727 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:34:02.727 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:34:08.015 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:34:08.015 03:34:11 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:34:08.015 Remove shared memory files
00:34:08.015 03:34:11 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:08.015 03:34:11 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:34:08.015 03:34:11 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:34:08.015 03:34:11 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:34:08.015 03:34:11 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:08.015 03:34:11 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:34:08.015 ************************************
00:34:08.015 END TEST ftl
00:34:08.015 ************************************
00:34:08.015 
00:34:08.015 real 19m8.597s
00:34:08.015 user 20m52.225s
00:34:08.015 sys 1m20.592s
00:34:08.015 03:34:11 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:08.015 03:34:11 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:08.015 03:34:11 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:34:08.015 03:34:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:34:08.015 03:34:11 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:34:08.015 03:34:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:08.015 03:34:11 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:34:08.015 03:34:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:08.015 03:34:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:08.015 03:34:11 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:34:08.015 03:34:11 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:34:08.015 03:34:11 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:34:08.015 03:34:11 -- common/autotest_common.sh@724 -- # xtrace_disable
00:34:08.015 03:34:11 -- common/autotest_common.sh@10 -- # set +x
00:34:08.015 03:34:11 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:34:08.015 03:34:11 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:34:08.015 03:34:11 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:34:08.015 03:34:11 -- common/autotest_common.sh@10 -- # set +x
00:34:09.459 INFO: APP EXITING
00:34:09.459 INFO: killing all VMs
00:34:09.459 INFO: killing vhost app
00:34:09.459 INFO: EXIT DONE
00:34:09.720 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:09.982 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:34:09.982 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:34:09.982 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:34:09.982 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:34:10.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:10.815 Cleaning
00:34:10.815 Removing: /var/run/dpdk/spdk0/config
00:34:10.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:10.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:10.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:10.815 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:10.815 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:10.815 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:10.815 Removing: /var/run/dpdk/spdk0
00:34:10.815 Removing: /var/run/dpdk/spdk_pid69592
00:34:10.815 Removing: /var/run/dpdk/spdk_pid69750
00:34:10.815 Removing: /var/run/dpdk/spdk_pid69952
00:34:10.815 Removing: /var/run/dpdk/spdk_pid70039
00:34:10.815 Removing: /var/run/dpdk/spdk_pid70062
00:34:10.815 Removing: /var/run/dpdk/spdk_pid70174
00:34:10.815 Removing: /var/run/dpdk/spdk_pid70192
00:34:10.815 Removing: /var/run/dpdk/spdk_pid70374
00:34:10.815 Removing: /var/run/dpdk/spdk_pid70448
00:34:10.815 Removing: /var/run/dpdk/spdk_pid70527
00:34:10.815 Removing: /var/run/dpdk/spdk_pid70627
00:34:10.815 Removing: /var/run/dpdk/spdk_pid70708
00:34:10.815 Removing: /var/run/dpdk/spdk_pid70747
00:34:10.815 Removing: /var/run/dpdk/spdk_pid70784
00:34:10.815 Removing: /var/run/dpdk/spdk_pid70854
00:34:10.815 Removing: /var/run/dpdk/spdk_pid70966
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71385
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71433
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71478
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71490
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71559
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71575
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71633
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71649
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71691
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71709
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71751
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71769
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71897
00:34:10.815 Removing: /var/run/dpdk/spdk_pid71929
00:34:10.815 Removing: /var/run/dpdk/spdk_pid72012
00:34:10.815 Removing: /var/run/dpdk/spdk_pid72174
00:34:10.815 Removing: /var/run/dpdk/spdk_pid72241
00:34:10.815 Removing: /var/run/dpdk/spdk_pid72272
00:34:10.815 Removing: /var/run/dpdk/spdk_pid72694
00:34:10.815 Removing: /var/run/dpdk/spdk_pid72781
00:34:10.815 Removing: /var/run/dpdk/spdk_pid72885
00:34:10.815 Removing: /var/run/dpdk/spdk_pid72923
00:34:10.815 Removing: /var/run/dpdk/spdk_pid72949
00:34:10.815 Removing: /var/run/dpdk/spdk_pid73027
00:34:10.815 Removing: /var/run/dpdk/spdk_pid73639
00:34:10.815 Removing: /var/run/dpdk/spdk_pid73670
00:34:10.815 Removing: /var/run/dpdk/spdk_pid74129
00:34:10.815 Removing: /var/run/dpdk/spdk_pid74222
00:34:10.815 Removing: /var/run/dpdk/spdk_pid74321
00:34:10.815 Removing: /var/run/dpdk/spdk_pid74362
00:34:10.815 Removing: /var/run/dpdk/spdk_pid74383
00:34:10.815 Removing: /var/run/dpdk/spdk_pid74403
00:34:10.815 Removing: /var/run/dpdk/spdk_pid76241
00:34:10.815 Removing: /var/run/dpdk/spdk_pid76361
00:34:10.815 Removing: /var/run/dpdk/spdk_pid76371
00:34:10.815 Removing: /var/run/dpdk/spdk_pid76383
00:34:10.815 Removing: /var/run/dpdk/spdk_pid76427
00:34:10.815 Removing: /var/run/dpdk/spdk_pid76431
00:34:10.815 Removing: /var/run/dpdk/spdk_pid76443
00:34:10.815 Removing: /var/run/dpdk/spdk_pid76488
00:34:10.815 Removing: /var/run/dpdk/spdk_pid76492
00:34:10.815 Removing: /var/run/dpdk/spdk_pid76504
00:34:10.815 Removing: /var/run/dpdk/spdk_pid76549
00:34:10.815 Removing: /var/run/dpdk/spdk_pid76553
00:34:10.815 Removing: /var/run/dpdk/spdk_pid76565
00:34:10.815 Removing: /var/run/dpdk/spdk_pid77930
00:34:10.815 Removing: /var/run/dpdk/spdk_pid78016
00:34:11.076 Removing: /var/run/dpdk/spdk_pid79407
00:34:11.076 Removing: /var/run/dpdk/spdk_pid80784
00:34:11.077 Removing: /var/run/dpdk/spdk_pid80839
00:34:11.077 Removing: /var/run/dpdk/spdk_pid80888
00:34:11.077 Removing: /var/run/dpdk/spdk_pid80943
00:34:11.077 Removing: /var/run/dpdk/spdk_pid81020
00:34:11.077 Removing: /var/run/dpdk/spdk_pid81084
00:34:11.077 Removing: /var/run/dpdk/spdk_pid81221
00:34:11.077 Removing: /var/run/dpdk/spdk_pid81568
00:34:11.077 Removing: /var/run/dpdk/spdk_pid81593
00:34:11.077 Removing: /var/run/dpdk/spdk_pid82034
00:34:11.077 Removing: /var/run/dpdk/spdk_pid82216
00:34:11.077 Removing: /var/run/dpdk/spdk_pid82298
00:34:11.077 Removing: /var/run/dpdk/spdk_pid82397
00:34:11.077 Removing: /var/run/dpdk/spdk_pid82439
00:34:11.077 Removing: /var/run/dpdk/spdk_pid82459
00:34:11.077 Removing: /var/run/dpdk/spdk_pid82759
00:34:11.077 Removing: /var/run/dpdk/spdk_pid82797
00:34:11.077 Removing: /var/run/dpdk/spdk_pid82842
00:34:11.077 Removing: /var/run/dpdk/spdk_pid83208
00:34:11.077 Removing: /var/run/dpdk/spdk_pid83352
00:34:11.077 Removing: /var/run/dpdk/spdk_pid84145
00:34:11.077 Removing: /var/run/dpdk/spdk_pid84261
00:34:11.077 Removing: /var/run/dpdk/spdk_pid84419
00:34:11.077 Removing: /var/run/dpdk/spdk_pid84493
00:34:11.077 Removing: /var/run/dpdk/spdk_pid84759
00:34:11.077 Removing: /var/run/dpdk/spdk_pid84984
00:34:11.077 Removing: /var/run/dpdk/spdk_pid85304
00:34:11.077 Removing: /var/run/dpdk/spdk_pid85474
00:34:11.077 Removing: /var/run/dpdk/spdk_pid85627
00:34:11.077 Removing: /var/run/dpdk/spdk_pid85663
00:34:11.077 Removing: /var/run/dpdk/spdk_pid85867
00:34:11.077 Removing: /var/run/dpdk/spdk_pid85887
00:34:11.077 Removing: /var/run/dpdk/spdk_pid85929
00:34:11.077 Removing: /var/run/dpdk/spdk_pid86166
00:34:11.077 Removing: /var/run/dpdk/spdk_pid86375
00:34:11.077 Removing: /var/run/dpdk/spdk_pid87003
00:34:11.077 Removing: /var/run/dpdk/spdk_pid87958
00:34:11.077 Removing: /var/run/dpdk/spdk_pid88586
00:34:11.077 Removing: /var/run/dpdk/spdk_pid89336
00:34:11.077 Removing: /var/run/dpdk/spdk_pid89486
00:34:11.077 Removing: /var/run/dpdk/spdk_pid89572
00:34:11.077 Removing: /var/run/dpdk/spdk_pid89966
00:34:11.077 Removing: /var/run/dpdk/spdk_pid90021
00:34:11.077 Removing: /var/run/dpdk/spdk_pid90981
00:34:11.077 Removing: /var/run/dpdk/spdk_pid91545
00:34:11.077 Removing: /var/run/dpdk/spdk_pid92304
00:34:11.077 Removing: /var/run/dpdk/spdk_pid92432
00:34:11.077 Removing: /var/run/dpdk/spdk_pid92468
00:34:11.077 Removing: /var/run/dpdk/spdk_pid92522
00:34:11.077 Removing: /var/run/dpdk/spdk_pid92574
00:34:11.077 Removing: /var/run/dpdk/spdk_pid92628
00:34:11.077 Removing: /var/run/dpdk/spdk_pid92820
00:34:11.077 Removing: /var/run/dpdk/spdk_pid92889
00:34:11.077 Removing: /var/run/dpdk/spdk_pid92951
00:34:11.077 Removing: /var/run/dpdk/spdk_pid93007
00:34:11.077 Removing: /var/run/dpdk/spdk_pid93042
00:34:11.077 Removing: /var/run/dpdk/spdk_pid93092
00:34:11.077 Removing: /var/run/dpdk/spdk_pid93270
00:34:11.077 Removing: /var/run/dpdk/spdk_pid93484
00:34:11.077 Removing: /var/run/dpdk/spdk_pid94268
00:34:11.077 Removing: /var/run/dpdk/spdk_pid94992
00:34:11.077 Removing: /var/run/dpdk/spdk_pid95861
00:34:11.077 Removing: /var/run/dpdk/spdk_pid96803
00:34:11.077 Clean
00:34:11.077 03:34:14 -- common/autotest_common.sh@1451 -- # return 0
00:34:11.077 03:34:14 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:34:11.077 03:34:14 -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:11.077 03:34:14 -- common/autotest_common.sh@10 -- # set +x
00:34:11.338 03:34:14 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:34:11.338 03:34:14 -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:11.338 03:34:14 -- common/autotest_common.sh@10 -- # set +x
00:34:11.338 03:34:14 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:11.338 03:34:14 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:11.338 03:34:14 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:11.338 03:34:14 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:34:11.338 03:34:14 -- spdk/autotest.sh@394 -- # hostname
00:34:11.338 03:34:14 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:11.338 geninfo: WARNING: invalid characters removed from testname!
00:34:37.926 03:34:39 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:39.842 03:34:43 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:42.391 03:34:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:44.938 03:34:48 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:46.853 03:34:50 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:50.156 03:34:53 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:52.071 03:34:55 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:52.071 03:34:55 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:34:52.071 03:34:55 -- common/autotest_common.sh@1681 -- $ lcov --version
00:34:52.071 03:34:55 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:34:52.071 03:34:55 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:34:52.071 03:34:55 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:34:52.071 03:34:55 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:34:52.071 03:34:55 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:34:52.071 03:34:55 -- scripts/common.sh@336 -- $ IFS=.-:
00:34:52.071 03:34:55 -- scripts/common.sh@336 -- $ read -ra ver1
00:34:52.071 03:34:55 -- scripts/common.sh@337 -- $ IFS=.-:
00:34:52.071 03:34:55 -- scripts/common.sh@337 -- $ read -ra ver2
00:34:52.071 03:34:55 -- scripts/common.sh@338 -- $ local 'op=<'
00:34:52.071 03:34:55 -- scripts/common.sh@340 -- $ ver1_l=2
00:34:52.071 03:34:55 -- scripts/common.sh@341 -- $ ver2_l=1
00:34:52.071 03:34:55 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:34:52.071 03:34:55 -- scripts/common.sh@344 -- $ case "$op" in
00:34:52.071 03:34:55 -- scripts/common.sh@345 -- $ : 1
00:34:52.071 03:34:55 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:34:52.071 03:34:55 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:52.071 03:34:55 -- scripts/common.sh@365 -- $ decimal 1
00:34:52.071 03:34:55 -- scripts/common.sh@353 -- $ local d=1
00:34:52.071 03:34:55 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:34:52.071 03:34:55 -- scripts/common.sh@355 -- $ echo 1
00:34:52.071 03:34:55 -- scripts/common.sh@365 -- $ ver1[v]=1
00:34:52.071 03:34:55 -- scripts/common.sh@366 -- $ decimal 2
00:34:52.071 03:34:55 -- scripts/common.sh@353 -- $ local d=2
00:34:52.071 03:34:55 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:34:52.071 03:34:55 -- scripts/common.sh@355 -- $ echo 2
00:34:52.071 03:34:55 -- scripts/common.sh@366 -- $ ver2[v]=2
00:34:52.071 03:34:55 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:34:52.071 03:34:55 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:34:52.071 03:34:55 -- scripts/common.sh@368 -- $ return 0
00:34:52.071 03:34:55 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:52.071 03:34:55 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:34:52.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:52.071 --rc genhtml_branch_coverage=1
00:34:52.071 --rc genhtml_function_coverage=1
00:34:52.071 --rc genhtml_legend=1
00:34:52.071 --rc geninfo_all_blocks=1
00:34:52.071 --rc geninfo_unexecuted_blocks=1
00:34:52.071 
00:34:52.071 '
00:34:52.071 03:34:55 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:34:52.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:52.071 --rc genhtml_branch_coverage=1
00:34:52.071 --rc genhtml_function_coverage=1
00:34:52.071 --rc genhtml_legend=1
00:34:52.071 --rc geninfo_all_blocks=1
00:34:52.071 --rc geninfo_unexecuted_blocks=1
00:34:52.071 
00:34:52.071 '
00:34:52.071 03:34:55 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:34:52.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:52.071 --rc genhtml_branch_coverage=1
00:34:52.071 --rc genhtml_function_coverage=1
00:34:52.071 --rc genhtml_legend=1
00:34:52.071 --rc geninfo_all_blocks=1
00:34:52.071 --rc geninfo_unexecuted_blocks=1
00:34:52.071 
00:34:52.071 '
00:34:52.071 03:34:55 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:34:52.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:52.071 --rc genhtml_branch_coverage=1
00:34:52.071 --rc genhtml_function_coverage=1
00:34:52.071 --rc genhtml_legend=1
00:34:52.072 --rc geninfo_all_blocks=1
00:34:52.072 --rc geninfo_unexecuted_blocks=1
00:34:52.072 
00:34:52.072 '
00:34:52.072 03:34:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:34:52.072 03:34:55 -- scripts/common.sh@15 -- $ shopt -s extglob
00:34:52.072 03:34:55 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:34:52.072 03:34:55 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:52.072 03:34:55 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:52.072 03:34:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:52.072 03:34:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:52.072 03:34:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:52.072 03:34:55 -- paths/export.sh@5 -- $ export PATH
00:34:52.072 03:34:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:52.072 03:34:55 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:34:52.072 03:34:55 -- common/autobuild_common.sh@479 -- $ date +%s
00:34:52.072 03:34:55 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731900895.XXXXXX
00:34:52.072 03:34:55 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731900895.bMqDQz
00:34:52.072 03:34:55 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:34:52.072 03:34:55 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:34:52.072 03:34:55 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:34:52.072 03:34:55 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:34:52.072 03:34:55 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:34:52.072 03:34:55 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:34:52.072 03:34:55 -- common/autobuild_common.sh@495 -- $ get_config_params
00:34:52.072 03:34:55 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:34:52.072 03:34:55 -- common/autotest_common.sh@10 -- $ set +x
00:34:52.072 03:34:55 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:34:52.072 03:34:55 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:34:52.072 03:34:55 -- pm/common@17 -- $ local monitor
00:34:52.072 03:34:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:52.072 03:34:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:52.072 03:34:55 -- pm/common@25 -- $ sleep 1
00:34:52.072 03:34:55 -- pm/common@21 -- $ date +%s
00:34:52.072 03:34:55 -- pm/common@21 -- $ date +%s
00:34:52.072 03:34:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731900895
00:34:52.072 03:34:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731900895
00:34:52.072 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731900895_collect-cpu-load.pm.log
00:34:52.072 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731900895_collect-vmstat.pm.log
00:34:53.015 03:34:56 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:34:53.015 03:34:56 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:34:53.015 03:34:56 -- spdk/autopackage.sh@14 -- $ timing_finish
00:34:53.015 03:34:56 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:53.015 03:34:56 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:53.015 03:34:56 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:53.015 03:34:56 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:34:53.015 03:34:56 -- pm/common@29 -- $ signal_monitor_resources TERM
00:34:53.015 03:34:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:34:53.015 03:34:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:53.015 03:34:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:34:53.015 03:34:56 -- pm/common@44 -- $ pid=98476
00:34:53.015 03:34:56 -- pm/common@50 -- $ kill -TERM 98476
00:34:53.015 03:34:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:53.015 03:34:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:34:53.015 03:34:56 -- pm/common@44 -- $ pid=98477
00:34:53.015 03:34:56 -- pm/common@50 -- $ kill -TERM 98477
00:34:53.015 + [[ -n 5766 ]]
00:34:53.015 + sudo kill 5766
00:34:53.026 [Pipeline] }
00:34:53.042 [Pipeline] // timeout
00:34:53.047 [Pipeline] }
00:34:53.062 [Pipeline] // stage
00:34:53.067 [Pipeline] }
00:34:53.081 [Pipeline] // catchError
00:34:53.091 [Pipeline] stage
00:34:53.093 [Pipeline] { (Stop VM)
00:34:53.106 [Pipeline] sh
00:34:53.391 + vagrant halt
00:34:55.928 ==> default: Halting domain...
00:35:01.227 [Pipeline] sh
00:35:01.508 + vagrant destroy -f
00:35:03.448 ==> default: Removing domain...
00:35:04.407 [Pipeline] sh
00:35:04.692 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:35:04.702 [Pipeline] }
00:35:04.716 [Pipeline] // stage
00:35:04.721 [Pipeline] }
00:35:04.738 [Pipeline] // dir
00:35:04.743 [Pipeline] }
00:35:04.760 [Pipeline] // wrap
00:35:04.766 [Pipeline] }
00:35:04.779 [Pipeline] // catchError
00:35:04.790 [Pipeline] stage
00:35:04.792 [Pipeline] { (Epilogue)
00:35:04.808 [Pipeline] sh
00:35:05.098 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:10.386 [Pipeline] catchError
00:35:10.388 [Pipeline] {
00:35:10.400 [Pipeline] sh
00:35:10.687 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:10.687 Artifacts sizes are good
00:35:10.698 [Pipeline] }
00:35:10.714 [Pipeline] // catchError
00:35:10.726 [Pipeline] archiveArtifacts
00:35:10.733 Archiving artifacts
00:35:10.825 [Pipeline] cleanWs
00:35:10.838 [WS-CLEANUP] Deleting project workspace...
00:35:10.838 [WS-CLEANUP] Deferred wipeout is used...
00:35:10.845 [WS-CLEANUP] done
00:35:10.847 [Pipeline] }
00:35:10.863 [Pipeline] // stage
00:35:10.868 [Pipeline] }
00:35:10.882 [Pipeline] // node
00:35:10.888 [Pipeline] End of Pipeline
00:35:10.931 Finished: SUCCESS